Jan 28 15:18:28 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 28 15:18:28 crc restorecon[4589]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 
15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:18:28 crc 
restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 
15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: 
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:28 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:18:29 crc restorecon[4589]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:18:29 crc restorecon[4589]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 28 15:18:30 crc kubenswrapper[4656]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 15:18:30 crc kubenswrapper[4656]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 28 15:18:30 crc kubenswrapper[4656]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 15:18:30 crc kubenswrapper[4656]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
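The long run of restorecon "not reset as customized by admin" messages above is expected behavior rather than an error: container_file_t is listed among the SELinux customizable types, and restorecon leaves a customizable context in place unless forced with -F, logging each skipped path. A minimal sketch of that decision, assuming the libselinux Python bindings (the selinux module, including its SWIG-exposed is_context_customizable) are installed on the node; the checked path is illustrative:

    import selinux

    def explain(path):
        # Current on-disk label vs. what the loaded policy would assign.
        cur = selinux.getfilecon(path)[1]
        want = selinux.matchpathcon(path, 0)[1]
        if cur == want:
            print(f"{path}: already correct ({cur})")
        elif selinux.is_context_customizable(cur):
            # restorecon without -F skips these, logging
            # "not reset as customized by admin".
            print(f"{path}: {cur} is customizable; left alone")
        else:
            print(f"{path}: would relabel {cur} -> {want}")

    explain("/var/lib/kubelet/plugins/csi-hostpath")  # illustrative path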
Jan 28 15:18:30 crc kubenswrapper[4656]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 28 15:18:30 crc kubenswrapper[4656]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.620127 4656 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.637998 4656 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638040 4656 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638047 4656 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638053 4656 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638058 4656 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638062 4656 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638067 4656 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638072 4656 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638077 4656 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638082 4656 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638086 4656 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638091 4656 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638095 4656 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638100 4656 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638105 4656 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638110 4656 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638114 4656 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638118 4656 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638125 4656 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
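Each of the deprecated-flag warnings above points at the same fix: move the setting into the KubeletConfiguration file passed via --config (the feature-gate warnings continue below). A minimal sketch of an equivalent config fragment for the flags that have config-file counterparts, assuming PyYAML; the endpoint, path, taint, and reservation values are illustrative, not this node's actual settings:

    import yaml  # assumes PyYAML is available

    kubelet_config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        # --container-runtime-endpoint
        "containerRuntimeEndpoint": "unix:///var/run/crio/crio.sock",
        # --volume-plugin-dir
        "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
        # --system-reserved
        "systemReserved": {"cpu": "500m", "memory": "1Gi"},
        # --register-with-taints
        "registerWithTaints": [
            {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
        ],
        # Note: --minimum-container-ttl-duration and
        # --pod-infra-container-image have no config-file equivalent;
        # per the warnings, use eviction settings / CRI instead.
    }
    print(yaml.safe_dump(kubelet_config, sort_keys=False))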
Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638131 4656 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638136 4656 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638141 4656 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638145 4656 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638150 4656 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638154 4656 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638159 4656 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638196 4656 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638201 4656 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638205 4656 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638210 4656 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638214 4656 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638218 4656 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638222 4656 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638228 4656 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638234 4656 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638239 4656 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638244 4656 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638249 4656 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638253 4656 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638259 4656 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638264 4656 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638269 4656 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638274 4656 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638279 4656 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638284 4656 feature_gate.go:330] unrecognized feature gate: Example Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638289 4656 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638293 4656 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638298 4656 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638302 4656 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638308 4656 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638313 4656 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638318 4656 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638322 4656 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638328 4656 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638334 4656 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
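The long runs of feature_gate.go:330 warnings above are benign at this level: kubenswrapper hands the full OpenShift feature-gate set to a component that only registers the upstream Kubernetes gates, so every OpenShift-only name is warned about and ignored, while recognized gates (the feature_gate.go:353/351 entries, e.g. DisableKubeletCloudCredentialProviders, ValidatingAdmissionPolicy, KMSv1) are applied. A toy model of that warn-and-ignore behavior, with a small illustrative subset standing in for the kubelet's real registry:

# Toy model of the feature_gate.go behavior seen above: known gates are
# applied, unknown ones only produce a warning. KNOWN_GATES is a small
# illustrative subset, not the kubelet's actual registry.
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname).1s %(message)s")
log = logging.getLogger("feature_gate")

KNOWN_GATES = {
    "CloudDualStackNodeIPs": True,      # GA in this release
    "KMSv1": False,                     # deprecated, default off
    "ValidatingAdmissionPolicy": True,  # GA in this release
}

def apply_gates(requested: dict) -> dict:
    effective = dict(KNOWN_GATES)
    for name, value in requested.items():
        if name not in effective:
            log.warning("unrecognized feature gate: %s", name)
            continue
        effective[name] = value
    return effective

# GatewayAPI is OpenShift-level here, so it only warns; KMSv1 is applied.
print(apply_gates({"GatewayAPI": True, "KMSv1": True}))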
Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638340 4656 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638345 4656 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638351 4656 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638356 4656 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638361 4656 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638366 4656 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638370 4656 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638375 4656 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638381 4656 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638386 4656 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638391 4656 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638395 4656 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638401 4656 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638405 4656 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638410 4656 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.638413 4656 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649063 4656 flags.go:64] FLAG: --address="0.0.0.0" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649246 4656 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649264 4656 flags.go:64] FLAG: --anonymous-auth="true" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649271 4656 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649281 4656 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649287 4656 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649296 4656 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649303 4656 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649309 4656 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649314 4656 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649321 4656 flags.go:64] FLAG: 
--bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649326 4656 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649331 4656 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649337 4656 flags.go:64] FLAG: --cgroup-root="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649341 4656 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649346 4656 flags.go:64] FLAG: --client-ca-file="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649351 4656 flags.go:64] FLAG: --cloud-config="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649355 4656 flags.go:64] FLAG: --cloud-provider="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649359 4656 flags.go:64] FLAG: --cluster-dns="[]" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649364 4656 flags.go:64] FLAG: --cluster-domain="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649369 4656 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649373 4656 flags.go:64] FLAG: --config-dir="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649377 4656 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649383 4656 flags.go:64] FLAG: --container-log-max-files="5" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649389 4656 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649393 4656 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649398 4656 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649402 4656 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649407 4656 flags.go:64] FLAG: --contention-profiling="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649411 4656 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649416 4656 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649420 4656 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649425 4656 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649433 4656 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649437 4656 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649442 4656 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649447 4656 flags.go:64] FLAG: --enable-load-reader="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649452 4656 flags.go:64] FLAG: --enable-server="true" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649456 4656 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649464 4656 flags.go:64] FLAG: --event-burst="100" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649471 4656 flags.go:64] FLAG: --event-qps="50" 
Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649475 4656 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649480 4656 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649485 4656 flags.go:64] FLAG: --eviction-hard="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649490 4656 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649495 4656 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649499 4656 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649504 4656 flags.go:64] FLAG: --eviction-soft="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649508 4656 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649512 4656 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649516 4656 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649521 4656 flags.go:64] FLAG: --experimental-mounter-path="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649525 4656 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649529 4656 flags.go:64] FLAG: --fail-swap-on="true" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649534 4656 flags.go:64] FLAG: --feature-gates="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649539 4656 flags.go:64] FLAG: --file-check-frequency="20s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649544 4656 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649549 4656 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649582 4656 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649587 4656 flags.go:64] FLAG: --healthz-port="10248" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649591 4656 flags.go:64] FLAG: --help="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649595 4656 flags.go:64] FLAG: --hostname-override="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649600 4656 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649604 4656 flags.go:64] FLAG: --http-check-frequency="20s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649610 4656 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649614 4656 flags.go:64] FLAG: --image-credential-provider-config="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649619 4656 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649624 4656 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649628 4656 flags.go:64] FLAG: --image-service-endpoint="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649632 4656 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649636 4656 flags.go:64] FLAG: --kube-api-burst="100" Jan 28 15:18:30 crc 
kubenswrapper[4656]: I0128 15:18:30.649641 4656 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649649 4656 flags.go:64] FLAG: --kube-api-qps="50" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649653 4656 flags.go:64] FLAG: --kube-reserved="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649658 4656 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649663 4656 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649668 4656 flags.go:64] FLAG: --kubelet-cgroups="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649672 4656 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649676 4656 flags.go:64] FLAG: --lock-file="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649680 4656 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649685 4656 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649689 4656 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649698 4656 flags.go:64] FLAG: --log-json-split-stream="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649703 4656 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649707 4656 flags.go:64] FLAG: --log-text-split-stream="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649711 4656 flags.go:64] FLAG: --logging-format="text" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649716 4656 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649721 4656 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649725 4656 flags.go:64] FLAG: --manifest-url="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649729 4656 flags.go:64] FLAG: --manifest-url-header="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649741 4656 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649746 4656 flags.go:64] FLAG: --max-open-files="1000000" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649751 4656 flags.go:64] FLAG: --max-pods="110" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649756 4656 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649765 4656 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649769 4656 flags.go:64] FLAG: --memory-manager-policy="None" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649773 4656 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649778 4656 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649782 4656 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649787 4656 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 
15:18:30.649808 4656 flags.go:64] FLAG: --node-status-max-images="50" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649812 4656 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649817 4656 flags.go:64] FLAG: --oom-score-adj="-999" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649821 4656 flags.go:64] FLAG: --pod-cidr="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649825 4656 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649832 4656 flags.go:64] FLAG: --pod-manifest-path="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649837 4656 flags.go:64] FLAG: --pod-max-pids="-1" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649841 4656 flags.go:64] FLAG: --pods-per-core="0" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649846 4656 flags.go:64] FLAG: --port="10250" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649850 4656 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649854 4656 flags.go:64] FLAG: --provider-id="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649859 4656 flags.go:64] FLAG: --qos-reserved="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649863 4656 flags.go:64] FLAG: --read-only-port="10255" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649867 4656 flags.go:64] FLAG: --register-node="true" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649871 4656 flags.go:64] FLAG: --register-schedulable="true" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649875 4656 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649885 4656 flags.go:64] FLAG: --registry-burst="10" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649890 4656 flags.go:64] FLAG: --registry-qps="5" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649894 4656 flags.go:64] FLAG: --reserved-cpus="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649898 4656 flags.go:64] FLAG: --reserved-memory="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649904 4656 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649908 4656 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649913 4656 flags.go:64] FLAG: --rotate-certificates="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649917 4656 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649921 4656 flags.go:64] FLAG: --runonce="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649925 4656 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649930 4656 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649938 4656 flags.go:64] FLAG: --seccomp-default="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649942 4656 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649947 4656 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 28 15:18:30 crc 
kubenswrapper[4656]: I0128 15:18:30.649953 4656 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649958 4656 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649963 4656 flags.go:64] FLAG: --storage-driver-password="root" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649967 4656 flags.go:64] FLAG: --storage-driver-secure="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649971 4656 flags.go:64] FLAG: --storage-driver-table="stats" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649976 4656 flags.go:64] FLAG: --storage-driver-user="root" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649980 4656 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649985 4656 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649989 4656 flags.go:64] FLAG: --system-cgroups="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.649993 4656 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.650001 4656 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.650005 4656 flags.go:64] FLAG: --tls-cert-file="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.650010 4656 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.650015 4656 flags.go:64] FLAG: --tls-min-version="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.650020 4656 flags.go:64] FLAG: --tls-private-key-file="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.650025 4656 flags.go:64] FLAG: --topology-manager-policy="none" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.650029 4656 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.650033 4656 flags.go:64] FLAG: --topology-manager-scope="container" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.650037 4656 flags.go:64] FLAG: --v="2" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.650044 4656 flags.go:64] FLAG: --version="false" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.650050 4656 flags.go:64] FLAG: --vmodule="" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.650056 4656 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.650060 4656 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650254 4656 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650264 4656 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650269 4656 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650274 4656 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650279 4656 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650285 4656 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 15:18:30 crc 
kubenswrapper[4656]: W0128 15:18:30.650293 4656 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650298 4656 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650303 4656 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650307 4656 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650311 4656 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650315 4656 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650320 4656 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650326 4656 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650332 4656 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650336 4656 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650341 4656 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650346 4656 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650352 4656 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650357 4656 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650362 4656 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650366 4656 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650370 4656 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650374 4656 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650381 4656 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
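The flags.go:64] FLAG: entries above are the kubelet echoing every flag value at startup, visible because it runs at verbosity 2 (FLAG: --v="2"). They are handy for auditing the effective command line. A small sketch that scrapes them out of raw journal text into a dict; values containing embedded double quotes would need a more careful pattern:

# Sketch: pull the kubelet's effective flag values out of journal text.
# `journal_text` stands in for raw output such as `journalctl -u kubelet`.
import re

FLAG_RE = re.compile(r'flags\.go:64\] FLAG: (--[A-Za-z0-9-]+)="(.*?)"')

def parse_flags(journal_text: str) -> dict:
    """Return {flag_name: value} for every FLAG: line in the text."""
    return {name: value for name, value in FLAG_RE.findall(journal_text)}

sample = (
    'I0128 15:18:30.649063 4656 flags.go:64] FLAG: --address="0.0.0.0" '
    'I0128 15:18:30.649373 4656 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"'
)
flags = parse_flags(sample)
print(flags["--config"])  # -> /etc/kubernetes/kubelet.conf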
Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650387 4656 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650392 4656 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650398 4656 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650403 4656 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650408 4656 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650412 4656 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650417 4656 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650421 4656 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650426 4656 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650430 4656 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650435 4656 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650439 4656 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650443 4656 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650450 4656 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650454 4656 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650458 4656 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650463 4656 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650467 4656 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650471 4656 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650476 4656 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650480 4656 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650484 4656 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650488 4656 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650492 4656 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650497 4656 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650501 4656 feature_gate.go:330] unrecognized 
feature gate: Example Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650505 4656 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650512 4656 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650517 4656 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650522 4656 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650528 4656 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650532 4656 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650536 4656 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650540 4656 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650543 4656 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650548 4656 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650552 4656 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650556 4656 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650561 4656 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650565 4656 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650569 4656 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650573 4656 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650577 4656 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650582 4656 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650586 4656 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.650593 4656 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.650624 4656 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.696182 4656 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 28 15:18:30 crc 
kubenswrapper[4656]: I0128 15:18:30.696246 4656 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696336 4656 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696348 4656 feature_gate.go:330] unrecognized feature gate: Example Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696353 4656 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696358 4656 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696363 4656 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696367 4656 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696372 4656 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696378 4656 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696385 4656 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696395 4656 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696402 4656 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696409 4656 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696416 4656 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696423 4656 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696428 4656 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696433 4656 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696438 4656 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696443 4656 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696447 4656 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696453 4656 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696457 4656 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696462 4656 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696467 4656 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696471 4656 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696476 4656 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity 
Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696481 4656 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696486 4656 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696490 4656 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696495 4656 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696499 4656 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696505 4656 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696510 4656 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696515 4656 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696519 4656 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696525 4656 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696531 4656 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696535 4656 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696540 4656 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696545 4656 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696550 4656 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696556 4656 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696561 4656 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696565 4656 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696570 4656 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696575 4656 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696582 4656 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696588 4656 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696595 4656 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696600 4656 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696606 4656 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696612 4656 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696617 4656 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696622 4656 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696627 4656 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696632 4656 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696637 4656 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696642 4656 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696647 4656 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696651 4656 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696656 4656 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696660 4656 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696665 4656 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696669 4656 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696673 4656 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696678 4656 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696682 4656 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696689 4656 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
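Note that the whole unrecognized-gate block repeats several times in this section; the gate map appears to be applied more than once during startup (once while parsing flags, then again as server components come up), and each pass re-logs the same unknown names. For triage it helps to collapse that repetition into one count per gate:

# Sketch for log triage: count the recurring "unrecognized feature gate"
# warnings so each gate shows up once with its repeat count. `lines`
# would be an iterable of journal lines.
import re
from collections import Counter

GATE_RE = re.compile(r"unrecognized feature gate: (\S+)")

def count_unrecognized(lines) -> Counter:
    hits = Counter()
    for line in lines:
        m = GATE_RE.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits

sample = [
    "W0128 15:18:30.638040 4656 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot",
    "W0128 15:18:30.650398 4656 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot",
]
print(count_unrecognized(sample))  # Counter({'VolumeGroupSnapshot': 2})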
Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696694 4656 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696700 4656 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696705 4656 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696711 4656 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.696721 4656 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696863 4656 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696872 4656 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696877 4656 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696882 4656 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696887 4656 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696892 4656 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696897 4656 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696901 4656 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696906 4656 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696912 4656 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696917 4656 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696921 4656 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696926 4656 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696930 4656 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696935 4656 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696939 4656 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696944 4656 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696948 4656 feature_gate.go:330] unrecognized feature gate: 
HardwareSpeed Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696954 4656 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696960 4656 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696965 4656 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696970 4656 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696975 4656 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696979 4656 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696984 4656 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696989 4656 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696993 4656 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.696998 4656 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697002 4656 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697006 4656 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697011 4656 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697016 4656 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697022 4656 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697027 4656 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697033 4656 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697037 4656 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697041 4656 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697047 4656 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697052 4656 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697056 4656 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697061 4656 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697065 4656 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697070 4656 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697074 4656 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697079 4656 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697083 4656 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697088 4656 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697092 4656 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697097 4656 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697101 4656 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697106 4656 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697110 4656 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697114 4656 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697119 4656 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697123 4656 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697129 4656 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697135 4656 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697139 4656 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697144 4656 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697149 4656 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697154 4656 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697175 4656 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697181 4656 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697188 4656 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697193 4656 feature_gate.go:330] unrecognized feature gate: Example Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697199 4656 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697205 4656 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697210 4656 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697215 4656 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697221 4656 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 15:18:30 crc kubenswrapper[4656]: W0128 15:18:30.697227 4656 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.697236 4656 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.699238 4656 server.go:940] "Client rotation is on, will bootstrap in background" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.718098 4656 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.718261 4656 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
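The resolved "feature gates: {map[...]}" line, printed at the end of each pass above, is the authoritative record of what is actually enabled once the warnings are discarded. A sketch that converts that Go map literal into a Python dict:

# Sketch: turn the kubelet's "feature gates: {map[...]}" line into a
# Python dict of gate -> bool.
import re

def parse_gate_map(line: str) -> dict:
    inner = re.search(r"\{map\[(.*?)\]\}", line)
    if not inner:
        return {}
    return {
        key: value == "true"
        for key, value in (pair.split(":") for pair in inner.group(1).split())
    }

line = ("feature gates: {map[CloudDualStackNodeIPs:true "
        "DisableKubeletCloudCredentialProviders:true KMSv1:true "
        "NodeSwap:false ValidatingAdmissionPolicy:true]}")
gates = parse_gate_map(line)
print(sorted(name for name, on in gates.items() if on))

The E0128 certificate_manager error that follows ("connection refused" on the CSR POST to api-int.crc.testing:6443) is most likely just cold-boot ordering: the kubelet's first certificate-rotation attempt fires before the local kube-apiserver is serving, and the manager keeps retrying until it is.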
Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.721173 4656 server.go:997] "Starting client certificate rotation" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.721219 4656 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.742942 4656 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-15 07:46:53.849058482 +0000 UTC Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.743091 4656 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.881747 4656 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.886688 4656 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 15:18:30 crc kubenswrapper[4656]: E0128 15:18:30.892961 4656 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.916062 4656 log.go:25] "Validated CRI v1 runtime API" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.986309 4656 log.go:25] "Validated CRI v1 image API" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.988151 4656 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.994704 4656 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-28-15-12-22-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 28 15:18:30 crc kubenswrapper[4656]: I0128 15:18:30.994815 4656 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.006200 4656 manager.go:217] Machine: {Timestamp:2026-01-28 15:18:31.00348316 +0000 UTC m=+1.511653984 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a40465ae-d87c-4dd5-a6fc-ca512905e140 BootID:c05dbb0a-1aab-49df-9964-1b1f0273dfec Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 
Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:6f:33:b5 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:6f:33:b5 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ef:e2:fd Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:46:87:36 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a6:33:8d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:28:77:09 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:9a:98:3c:fe:c6:1a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:e6:df:37:20:9b:77 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.006646 4656 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. 
Perf event counters are not available. Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.006934 4656 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.008682 4656 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.008940 4656 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.008985 4656 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.009470 4656 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.009487 4656 container_manager_linux.go:303] "Creating device plugin manager" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.010722 4656 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.010767 4656 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.011058 4656 state_mem.go:36] "Initialized new in-memory state store" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.011752 4656 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.043470 4656 kubelet.go:418] "Attempting to sync node with API server" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.043535 4656 kubelet.go:313] "Adding static pod 
path" path="/etc/kubernetes/manifests" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.043602 4656 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.043631 4656 kubelet.go:324] "Adding apiserver pod source" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.043655 4656 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.058905 4656 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 28 15:18:31 crc kubenswrapper[4656]: W0128 15:18:31.059930 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.060058 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:31 crc kubenswrapper[4656]: W0128 15:18:31.059935 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.060117 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.060570 4656 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.062377 4656 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.064319 4656 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.064361 4656 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.064378 4656 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.064392 4656 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.064414 4656 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.064429 4656 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.064449 4656 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.064470 4656 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.064486 4656 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.064502 4656 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.064523 4656 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.064538 4656 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.068325 4656 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.070013 4656 server.go:1280] "Started kubelet" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.071425 4656 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.071417 4656 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.072353 4656 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 15:18:31 crc systemd[1]: Started Kubernetes Kubelet. 
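[editor's note] The podresources API announced above serves gRPC on a unix socket. A real client would use the k8s.io/kubelet podresources packages over gRPC, but a bare connect is enough to confirm the endpoint from the log is accepting connections (socket path from the log; needs root):

    // sketch: check that the podresources server's unix socket is live.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	const sock = "/var/lib/kubelet/pod-resources/kubelet.sock" // from the log
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		fmt.Printf("socket not ready: %v\n", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("podresources socket is accepting connections")
    }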
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.073324 4656 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.082679 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.082765 4656 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.083485 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:14:08.645487986 +0000 UTC Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.084760 4656 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.086908 4656 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.087056 4656 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.087066 4656 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.088385 4656 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.088563 4656 factory.go:55] Registering systemd factory Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.088647 4656 factory.go:221] Registration of the systemd container factory successfully Jan 28 15:18:31 crc kubenswrapper[4656]: W0128 15:18:31.088819 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.088902 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.088414 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="200ms" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.090215 4656 server.go:460] "Adding debug handlers to kubelet server" Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.093276 4656 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188eee1ccd0e9ebe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:18:31.069966014 +0000 UTC m=+1.578136828,LastTimestamp:2026-01-28 15:18:31.069966014 +0000 UTC m=+1.578136828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.102710 4656 factory.go:153] Registering CRI-O factory Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.102759 4656 factory.go:221] Registration of the crio container factory successfully Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.102809 4656 factory.go:103] Registering Raw factory Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.102844 4656 manager.go:1196] Started watching for new ooms in manager Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.103662 4656 manager.go:319] Starting recovery of all containers Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.107819 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.107879 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.107891 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.107900 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.107912 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.107922 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.107935 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.107946 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.107958 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108006 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108022 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108033 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108045 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108059 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108073 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108085 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108097 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108109 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108120 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108138 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108154 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108190 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108209 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108229 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108243 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108255 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108272 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108282 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108293 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108302 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108346 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108359 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108369 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108382 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108393 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108403 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108413 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108424 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108433 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108446 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108458 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108466 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108476 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108487 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108497 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108507 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108523 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108533 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108543 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108554 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108563 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108621 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108661 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108679 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108694 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108706 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108718 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108731 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108743 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108753 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108766 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108780 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108794 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108808 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108851 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108868 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108880 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108893 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108910 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108922 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108938 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108951 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108965 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108980 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.108994 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109009 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109024 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109039 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109055 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109067 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109080 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109101 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109114 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109127 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109140 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109174 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109187 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109199 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109210 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109221 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109233 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109245 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109259 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109274 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109292 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109306 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109320 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109335 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.109348 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.118953 4656 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.119682 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.119744 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.119762 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.119776 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.119796 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.122563 4656 manager.go:324] Recovery completed Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.126236 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: 
I0128 15:18:31.126406 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.126477 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.126559 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.126624 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.126695 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.126754 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.126865 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.126939 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.127004 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.127079 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.127141 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 28 15:18:31 crc 
kubenswrapper[4656]: I0128 15:18:31.127230 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.127291 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.127360 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.127423 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.127488 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.127552 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.127611 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.127678 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.127739 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.127807 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.127866 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 
15:18:31.127923 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.127988 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.128053 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.128121 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.128223 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.128302 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.128391 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.128455 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.128519 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.128581 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.128639 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.128695 4656 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.128760 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.128824 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.128897 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.128960 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.129020 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.129084 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.129146 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.129221 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.129279 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.129350 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.129414 4656 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.129471 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.129529 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.129585 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.129643 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.129699 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.129761 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.129830 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.129886 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.129953 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.130015 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.130085 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.130175 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.130237 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.130296 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.130352 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.130419 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.130482 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.130540 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.130600 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.130658 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.130714 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.130775 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.130831 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.130889 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.130948 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131005 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131066 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131125 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131197 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131258 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131313 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131378 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131434 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131491 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131547 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131602 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131662 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131719 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131777 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131836 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131891 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.131994 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.132057 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.132115 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.132209 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.132295 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.132358 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.132426 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.132485 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.132545 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.132601 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.132656 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.132718 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.132777 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.132838 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.132905 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.132984 4656 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.133052 4656 reconstruct.go:97] "Volume reconstruction finished" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.133114 4656 reconciler.go:26] "Reconciler: start to sync state" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.135731 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.138217 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.138271 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.138282 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.144620 4656 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.144817 4656 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.144907 4656 state_mem.go:36] "Initialized new in-memory state store" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.167552 4656 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.169185 4656 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.169267 4656 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.169388 4656 kubelet.go:2335] "Starting kubelet main sync loop" Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.169452 4656 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 15:18:31 crc kubenswrapper[4656]: W0128 15:18:31.170591 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.170661 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.184897 4656 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.270393 4656 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.285856 4656 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.291120 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="400ms" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.312424 4656 policy_none.go:49] "None policy: Start" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.313881 4656 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.313927 4656 state_mem.go:35] "Initializing new in-memory state store" Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.386321 4656 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.436693 4656 manager.go:334] "Starting Device Plugin manager" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.436869 4656 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.436893 4656 server.go:79] "Starting device plugin registration server" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.437559 4656 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.437594 4656 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.437914 4656 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 28 15:18:31 crc 
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.438004 4656 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.438017 4656 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.449997 4656 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.471341 4656 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.471660 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.473908 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.474026 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.474055 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.474497 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.475735 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.475786 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.478203 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.478260 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.478278 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.478372 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.478435 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.478449 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.478516 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.478752 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.478834 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.479562 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.479598 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.479611 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.479728 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.479951 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.480003 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.480288 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.480327 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.480342 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.480586 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.480608 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.480618 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.480712 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.481116 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.481194 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.481461 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.481490 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.481506 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.481683 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.481720 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.482584 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.482612 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.482623 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.482757 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.482816 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.482831 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.482863 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.482884 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.482895 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.538697 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.538756 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.538827 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.538873 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.538898 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.538983 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.539029 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.539060 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.539416 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.539501 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.539556 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.539592 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.539617 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.539639 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.539664 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.539710 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.540806 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.540846 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.540862 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.540894 4656 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.541272 4656 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.196:6443: connect: connection refused" node="crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.640752 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.640879 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.640909 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.640932 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.640957 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.640977 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.640995 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641002 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641017 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641214 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641313 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641056 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641061 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641081 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641082 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641279 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641029 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641084 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641675 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641442 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641708 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641747 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641767 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641785 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641791 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641829 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641849 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641875 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.641895 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.642044 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.692768 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="800ms"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.741824 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.743358 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.743422 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.743435 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.743474 4656 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: E0128 15:18:31.743975 4656 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.196:6443: connect: connection refused" node="crc"
Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.809708 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.848929 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.864969 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:18:31 crc kubenswrapper[4656]: I0128 15:18:31.871221 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:18:31 crc kubenswrapper[4656]: W0128 15:18:31.923241 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-f04b43ed4bfc15b44a35822f4c4f26af7f4c5a85bd9a844d145d478f27862e87 WatchSource:0}: Error finding container f04b43ed4bfc15b44a35822f4c4f26af7f4c5a85bd9a844d145d478f27862e87: Status 404 returned error can't find the container with id f04b43ed4bfc15b44a35822f4c4f26af7f4c5a85bd9a844d145d478f27862e87 Jan 28 15:18:31 crc kubenswrapper[4656]: W0128 15:18:31.934005 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-810214e1f021c1284f77cce04ddd752a6fe134fb74e06a6ea0964bb2317cae22 WatchSource:0}: Error finding container 810214e1f021c1284f77cce04ddd752a6fe134fb74e06a6ea0964bb2317cae22: Status 404 returned error can't find the container with id 810214e1f021c1284f77cce04ddd752a6fe134fb74e06a6ea0964bb2317cae22 Jan 28 15:18:31 crc kubenswrapper[4656]: W0128 15:18:31.937838 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-1f56c31a2ac15d493f033d534d4027cc5c93306845c23859f8a8afd33703df4a WatchSource:0}: Error finding container 1f56c31a2ac15d493f033d534d4027cc5c93306845c23859f8a8afd33703df4a: Status 404 returned error can't find the container with id 1f56c31a2ac15d493f033d534d4027cc5c93306845c23859f8a8afd33703df4a Jan 28 15:18:31 crc kubenswrapper[4656]: W0128 15:18:31.958116 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-d0704ba1e8c262550ec69f38622acdde9e6025de34ed56e29bc3d6be796d7d3d WatchSource:0}: Error finding container d0704ba1e8c262550ec69f38622acdde9e6025de34ed56e29bc3d6be796d7d3d: Status 404 returned error can't find the container with id d0704ba1e8c262550ec69f38622acdde9e6025de34ed56e29bc3d6be796d7d3d Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.074836 4656 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.083940 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:53:37.435128802 +0000 UTC Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.144757 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:32 
crc kubenswrapper[4656]: I0128 15:18:32.146419 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.146462 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.146475 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.146504 4656 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:18:32 crc kubenswrapper[4656]: E0128 15:18:32.147227 4656 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.196:6443: connect: connection refused" node="crc" Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.175259 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f04b43ed4bfc15b44a35822f4c4f26af7f4c5a85bd9a844d145d478f27862e87"} Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.176590 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1f56c31a2ac15d493f033d534d4027cc5c93306845c23859f8a8afd33703df4a"} Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.177593 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"810214e1f021c1284f77cce04ddd752a6fe134fb74e06a6ea0964bb2317cae22"} Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.178599 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d0704ba1e8c262550ec69f38622acdde9e6025de34ed56e29bc3d6be796d7d3d"} Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.179545 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7cff113c4d194c1a13dbf65974700ff7d37aee555b520471f61972d90d0806c0"} Jan 28 15:18:32 crc kubenswrapper[4656]: W0128 15:18:32.182005 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:32 crc kubenswrapper[4656]: E0128 15:18:32.182090 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:32 crc kubenswrapper[4656]: W0128 15:18:32.233483 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: 
connection refused Jan 28 15:18:32 crc kubenswrapper[4656]: E0128 15:18:32.233636 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:32 crc kubenswrapper[4656]: E0128 15:18:32.494212 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="1.6s" Jan 28 15:18:32 crc kubenswrapper[4656]: W0128 15:18:32.575876 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:32 crc kubenswrapper[4656]: E0128 15:18:32.575982 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:32 crc kubenswrapper[4656]: W0128 15:18:32.592299 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:32 crc kubenswrapper[4656]: E0128 15:18:32.592397 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.947384 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.948780 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.948857 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.948874 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.948937 4656 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:18:32 crc kubenswrapper[4656]: E0128 15:18:32.949676 4656 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.196:6443: connect: connection refused" node="crc" Jan 28 15:18:32 crc kubenswrapper[4656]: I0128 15:18:32.954734 4656 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 15:18:32 crc kubenswrapper[4656]: E0128 15:18:32.955942 4656 certificate_manager.go:562] 
"Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:33 crc kubenswrapper[4656]: I0128 15:18:33.074515 4656 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:33 crc kubenswrapper[4656]: I0128 15:18:33.084744 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 19:09:45.600691226 +0000 UTC Jan 28 15:18:34 crc kubenswrapper[4656]: I0128 15:18:34.074119 4656 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:34 crc kubenswrapper[4656]: I0128 15:18:34.085248 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 15:53:46.490985666 +0000 UTC Jan 28 15:18:34 crc kubenswrapper[4656]: E0128 15:18:34.095572 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="3.2s" Jan 28 15:18:34 crc kubenswrapper[4656]: W0128 15:18:34.238363 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:34 crc kubenswrapper[4656]: E0128 15:18:34.238459 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:34 crc kubenswrapper[4656]: I0128 15:18:34.549892 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:34 crc kubenswrapper[4656]: I0128 15:18:34.556900 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:34 crc kubenswrapper[4656]: I0128 15:18:34.557033 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:34 crc kubenswrapper[4656]: I0128 15:18:34.557099 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:34 crc kubenswrapper[4656]: I0128 15:18:34.557220 4656 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:18:34 crc kubenswrapper[4656]: E0128 15:18:34.557764 4656 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.196:6443: connect: connection refused" node="crc" Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.074619 4656 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.085646 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:00:40.621393991 +0000 UTC Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.190922 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d"} Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.191535 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.193404 4656 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072" exitCode=0 Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.193527 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072"} Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.193744 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.195504 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.195651 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.195785 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.195785 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.195995 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.196012 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.196099 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f"} Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.196506 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.197966 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.198007 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.198018 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.199385 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"08932142792b5b7e1afc60e25e6fb6b092c9c65185a0e407f807d90b1928807c"} Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.201456 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197"} Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.201660 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.202879 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.202919 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:35 crc kubenswrapper[4656]: I0128 15:18:35.202956 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:35 crc kubenswrapper[4656]: W0128 15:18:35.233800 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:35 crc kubenswrapper[4656]: E0128 15:18:35.234019 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:35 crc kubenswrapper[4656]: W0128 15:18:35.339643 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:35 crc kubenswrapper[4656]: E0128 15:18:35.339806 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:35 crc kubenswrapper[4656]: W0128 15:18:35.760525 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:35 crc kubenswrapper[4656]: E0128 15:18:35.760899 4656 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:35 crc kubenswrapper[4656]: E0128 15:18:35.924453 4656 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188eee1ccd0e9ebe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:18:31.069966014 +0000 UTC m=+1.578136828,LastTimestamp:2026-01-28 15:18:31.069966014 +0000 UTC m=+1.578136828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.074553 4656 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.086620 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:38:37.001351221 +0000 UTC Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.205404 4656 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d" exitCode=0 Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.205490 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d"} Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.205680 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.206528 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.206552 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.206585 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.208637 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.208652 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f"} Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.209855 4656 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.209881 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.209896 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.210576 4656 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f" exitCode=0 Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.210630 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f"} Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.210660 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.211783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.211814 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.211827 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.213450 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"826db300be47e8ade08ecd18880a53f4ce70b3b8f4ffbcd327fec2f952b0168d"} Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.213481 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cadf761b9301aaeea19fad51cfac7b4aa80f49ae5e0fadf4eababc2c5bb945b3"} Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.215627 4656 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197" exitCode=0 Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.215658 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197"} Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.215734 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.216459 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.216510 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.216519 4656 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.218368 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.219244 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.219272 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:36 crc kubenswrapper[4656]: I0128 15:18:36.219283 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:37 crc kubenswrapper[4656]: I0128 15:18:37.074634 4656 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:37 crc kubenswrapper[4656]: I0128 15:18:37.087633 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 10:22:18.34404465 +0000 UTC Jan 28 15:18:37 crc kubenswrapper[4656]: I0128 15:18:37.221468 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f"} Jan 28 15:18:37 crc kubenswrapper[4656]: I0128 15:18:37.223528 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a"} Jan 28 15:18:37 crc kubenswrapper[4656]: I0128 15:18:37.226276 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:37 crc kubenswrapper[4656]: I0128 15:18:37.226574 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e5edae8f60ea42bc0f7cee0c415afdb634b13222f6a9b1bbac9e15d6b3ec3867"} Jan 28 15:18:37 crc kubenswrapper[4656]: I0128 15:18:37.227282 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:37 crc kubenswrapper[4656]: I0128 15:18:37.227323 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:37 crc kubenswrapper[4656]: I0128 15:18:37.227336 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:37 crc kubenswrapper[4656]: E0128 15:18:37.296759 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="6.4s" Jan 28 15:18:37 crc kubenswrapper[4656]: I0128 15:18:37.342530 4656 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 15:18:37 crc kubenswrapper[4656]: E0128 15:18:37.344297 4656 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:37 crc kubenswrapper[4656]: I0128 15:18:37.757955 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:37 crc kubenswrapper[4656]: I0128 15:18:37.759756 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:37 crc kubenswrapper[4656]: I0128 15:18:37.759808 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:37 crc kubenswrapper[4656]: I0128 15:18:37.759820 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:37 crc kubenswrapper[4656]: I0128 15:18:37.759853 4656 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:18:37 crc kubenswrapper[4656]: E0128 15:18:37.760497 4656 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.196:6443: connect: connection refused" node="crc" Jan 28 15:18:38 crc kubenswrapper[4656]: I0128 15:18:38.074425 4656 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:38 crc kubenswrapper[4656]: I0128 15:18:38.088588 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 14:32:33.872937042 +0000 UTC Jan 28 15:18:38 crc kubenswrapper[4656]: I0128 15:18:38.230261 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f"} Jan 28 15:18:38 crc kubenswrapper[4656]: W0128 15:18:38.977931 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:38 crc kubenswrapper[4656]: E0128 15:18:38.978054 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:39 crc kubenswrapper[4656]: I0128 15:18:39.077923 4656 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:39 crc kubenswrapper[4656]: I0128 15:18:39.107761 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 
15:51:51.596535413 +0000 UTC Jan 28 15:18:39 crc kubenswrapper[4656]: W0128 15:18:39.129312 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:39 crc kubenswrapper[4656]: E0128 15:18:39.129419 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:39 crc kubenswrapper[4656]: W0128 15:18:39.245730 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:39 crc kubenswrapper[4656]: E0128 15:18:39.245844 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:39 crc kubenswrapper[4656]: I0128 15:18:39.249110 4656 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f" exitCode=0 Jan 28 15:18:39 crc kubenswrapper[4656]: I0128 15:18:39.249207 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f"} Jan 28 15:18:39 crc kubenswrapper[4656]: I0128 15:18:39.249254 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:39 crc kubenswrapper[4656]: I0128 15:18:39.249254 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:39 crc kubenswrapper[4656]: I0128 15:18:39.250085 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:39 crc kubenswrapper[4656]: I0128 15:18:39.250108 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:39 crc kubenswrapper[4656]: I0128 15:18:39.250118 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:39 crc kubenswrapper[4656]: I0128 15:18:39.250086 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:39 crc kubenswrapper[4656]: I0128 15:18:39.250372 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:39 crc kubenswrapper[4656]: I0128 15:18:39.250466 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:40 crc kubenswrapper[4656]: I0128 15:18:40.074543 4656 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:40 crc kubenswrapper[4656]: I0128 15:18:40.107959 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:41:03.204253808 +0000 UTC Jan 28 15:18:40 crc kubenswrapper[4656]: I0128 15:18:40.253318 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5"} Jan 28 15:18:40 crc kubenswrapper[4656]: I0128 15:18:40.255200 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1"} Jan 28 15:18:40 crc kubenswrapper[4656]: I0128 15:18:40.257970 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0"} Jan 28 15:18:40 crc kubenswrapper[4656]: W0128 15:18:40.432723 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:40 crc kubenswrapper[4656]: E0128 15:18:40.432859 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:18:41 crc kubenswrapper[4656]: I0128 15:18:41.074485 4656 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:41 crc kubenswrapper[4656]: I0128 15:18:41.108729 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 07:29:53.623917663 +0000 UTC Jan 28 15:18:41 crc kubenswrapper[4656]: I0128 15:18:41.265113 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf"} Jan 28 15:18:41 crc kubenswrapper[4656]: I0128 15:18:41.267950 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c"} Jan 28 15:18:41 crc kubenswrapper[4656]: I0128 15:18:41.269459 4656 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0" exitCode=0 Jan 28 15:18:41 crc 
kubenswrapper[4656]: I0128 15:18:41.269496 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0"} Jan 28 15:18:41 crc kubenswrapper[4656]: I0128 15:18:41.269671 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:41 crc kubenswrapper[4656]: I0128 15:18:41.271105 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:41 crc kubenswrapper[4656]: I0128 15:18:41.271202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:41 crc kubenswrapper[4656]: I0128 15:18:41.271222 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:41 crc kubenswrapper[4656]: E0128 15:18:41.451128 4656 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 15:18:42 crc kubenswrapper[4656]: I0128 15:18:42.074127 4656 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:42 crc kubenswrapper[4656]: I0128 15:18:42.109644 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:34:30.881281951 +0000 UTC Jan 28 15:18:42 crc kubenswrapper[4656]: I0128 15:18:42.271529 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:42 crc kubenswrapper[4656]: I0128 15:18:42.272619 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:42 crc kubenswrapper[4656]: I0128 15:18:42.272674 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:42 crc kubenswrapper[4656]: I0128 15:18:42.272687 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:43 crc kubenswrapper[4656]: I0128 15:18:43.074627 4656 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:43 crc kubenswrapper[4656]: I0128 15:18:43.110755 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 21:33:56.937088202 +0000 UTC Jan 28 15:18:43 crc kubenswrapper[4656]: I0128 15:18:43.277912 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671"} Jan 28 15:18:43 crc kubenswrapper[4656]: I0128 15:18:43.281266 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d25512a305427a2fbc0dc915cc3dfb21cadd3db472a2764f1b5a686d60ec422e"} Jan 28 
15:18:43 crc kubenswrapper[4656]: E0128 15:18:43.702979 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="7s" Jan 28 15:18:44 crc kubenswrapper[4656]: I0128 15:18:44.083198 4656 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 28 15:18:44 crc kubenswrapper[4656]: I0128 15:18:44.111071 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 09:51:22.608648254 +0000 UTC Jan 28 15:18:44 crc kubenswrapper[4656]: I0128 15:18:44.161380 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:44 crc kubenswrapper[4656]: I0128 15:18:44.163019 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:44 crc kubenswrapper[4656]: I0128 15:18:44.163055 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:44 crc kubenswrapper[4656]: I0128 15:18:44.163067 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:44 crc kubenswrapper[4656]: I0128 15:18:44.163090 4656 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:18:44 crc kubenswrapper[4656]: E0128 15:18:44.163540 4656 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.196:6443: connect: connection refused" node="crc" Jan 28 15:18:44 crc kubenswrapper[4656]: I0128 15:18:44.346616 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c80831597489860182070ea4c6f6734b2feca0011557f863624a9181f66fa7c2"} Jan 28 15:18:44 crc kubenswrapper[4656]: I0128 15:18:44.346831 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:18:44 crc kubenswrapper[4656]: I0128 15:18:44.347999 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:18:44 crc kubenswrapper[4656]: I0128 15:18:44.348030 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:18:44 crc kubenswrapper[4656]: I0128 15:18:44.348041 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:18:44 crc kubenswrapper[4656]: I0128 15:18:44.391374 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"af4f0a931b81775cf3bcadec1c2d278079e6d6c08334a5d412f957ea057000a3"} Jan 28 15:18:44 crc kubenswrapper[4656]: I0128 15:18:44.391589 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"27f7605b956da7648bb4ea64104ebddadf45a4297723b28d1813ec330122f9de"} Jan 28 
15:18:45 crc kubenswrapper[4656]: I0128 15:18:45.123701 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 02:02:22.178904379 +0000 UTC
Jan 28 15:18:45 crc kubenswrapper[4656]: I0128 15:18:45.124437 4656 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused
Jan 28 15:18:45 crc kubenswrapper[4656]: I0128 15:18:45.211238 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:18:45 crc kubenswrapper[4656]: I0128 15:18:45.261484 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:18:45 crc kubenswrapper[4656]: I0128 15:18:45.261833 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:45 crc kubenswrapper[4656]: I0128 15:18:45.274729 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:45 crc kubenswrapper[4656]: I0128 15:18:45.274768 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:45 crc kubenswrapper[4656]: I0128 15:18:45.274777 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:45 crc kubenswrapper[4656]: I0128 15:18:45.398619 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"33daf9c576813489c0d122bf8b57511d33c442f5c4f81c8a1ba17b349d04d4da"}
Jan 28 15:18:45 crc kubenswrapper[4656]: I0128 15:18:45.398725 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:45 crc kubenswrapper[4656]: I0128 15:18:45.398846 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:18:45 crc kubenswrapper[4656]: I0128 15:18:45.399594 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:45 crc kubenswrapper[4656]: I0128 15:18:45.399637 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:45 crc kubenswrapper[4656]: I0128 15:18:45.399651 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:45 crc kubenswrapper[4656]: E0128 15:18:45.925682 4656 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188eee1ccd0e9ebe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:18:31.069966014 +0000 UTC m=+1.578136828,LastTimestamp:2026-01-28 15:18:31.069966014 +0000 UTC m=+1.578136828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.090412 4656 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.094741 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.094940 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.096345 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.096383 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.096396 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.104765 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.111408 4656 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 28 15:18:46 crc kubenswrapper[4656]: E0128 15:18:46.112712 4656 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.124325 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:47:18.745705788 +0000 UTC
Jan 28 15:18:46 crc kubenswrapper[4656]: W0128 15:18:46.201624 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused
Jan 28 15:18:46 crc kubenswrapper[4656]: E0128 15:18:46.202092 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.405468 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.408679 4656 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c80831597489860182070ea4c6f6734b2feca0011557f863624a9181f66fa7c2" exitCode=255
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.408884 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.408886 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c80831597489860182070ea4c6f6734b2feca0011557f863624a9181f66fa7c2"}
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.409820 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.409844 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.409853 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.410365 4656 scope.go:117] "RemoveContainer" containerID="c80831597489860182070ea4c6f6734b2feca0011557f863624a9181f66fa7c2"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.415443 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.416396 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.416736 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e9be91e89edee3d65aa4855a9e6e4354e182726e95ba57165fbebc4e1b334a57"}
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.417467 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.417504 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.417514 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.417781 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.417859 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.417876 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.422074 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:18:46 crc kubenswrapper[4656]: I0128 15:18:46.625335 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:18:46 crc kubenswrapper[4656]: W0128 15:18:46.929218 4656 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused
Jan 28 15:18:46 crc kubenswrapper[4656]: E0128 15:18:46.929349 4656 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError"
Jan 28 15:18:47 crc kubenswrapper[4656]: I0128 15:18:47.125611 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:41:27.142918838 +0000 UTC
Jan 28 15:18:47 crc kubenswrapper[4656]: I0128 15:18:47.419674 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 28 15:18:47 crc kubenswrapper[4656]: I0128 15:18:47.421202 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3"}
Jan 28 15:18:47 crc kubenswrapper[4656]: I0128 15:18:47.421281 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:47 crc kubenswrapper[4656]: I0128 15:18:47.421286 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:47 crc kubenswrapper[4656]: I0128 15:18:47.421466 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:47 crc kubenswrapper[4656]: I0128 15:18:47.422295 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:47 crc kubenswrapper[4656]: I0128 15:18:47.422323 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:47 crc kubenswrapper[4656]: I0128 15:18:47.422331 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:47 crc kubenswrapper[4656]: I0128 15:18:47.422379 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:47 crc kubenswrapper[4656]: I0128 15:18:47.422400 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:47 crc kubenswrapper[4656]: I0128 15:18:47.422410 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:47 crc kubenswrapper[4656]: I0128 15:18:47.422558 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:47 crc kubenswrapper[4656]: I0128 15:18:47.422583 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:47 crc kubenswrapper[4656]: I0128 15:18:47.422594 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:47 crc kubenswrapper[4656]: I0128 15:18:47.959185 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.120863 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.126407 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 01:11:30.023379786 +0000 UTC
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.336909 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.423420 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.423471 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.423479 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.423429 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.424307 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.424346 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.424369 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.424380 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.424349 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.424503 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.425403 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.425439 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.425450 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.836400 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.836694 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.838244 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.838319 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:48 crc kubenswrapper[4656]: I0128 15:18:48.838335 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:49 crc kubenswrapper[4656]: I0128 15:18:49.126729 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 20:08:45.863582201 +0000 UTC
Jan 28 15:18:49 crc kubenswrapper[4656]: I0128 15:18:49.425752 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:49 crc kubenswrapper[4656]: I0128 15:18:49.425812 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:49 crc kubenswrapper[4656]: I0128 15:18:49.427360 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:49 crc kubenswrapper[4656]: I0128 15:18:49.427396 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:49 crc kubenswrapper[4656]: I0128 15:18:49.427408 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:49 crc kubenswrapper[4656]: I0128 15:18:49.427525 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:49 crc kubenswrapper[4656]: I0128 15:18:49.427582 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:49 crc kubenswrapper[4656]: I0128 15:18:49.427594 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:49 crc kubenswrapper[4656]: I0128 15:18:49.432212 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 28 15:18:49 crc kubenswrapper[4656]: I0128 15:18:49.432367 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:49 crc kubenswrapper[4656]: I0128 15:18:49.434407 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:49 crc kubenswrapper[4656]: I0128 15:18:49.434899 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:49 crc kubenswrapper[4656]: I0128 15:18:49.434968 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:50 crc kubenswrapper[4656]: I0128 15:18:50.127264 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 02:51:15.424615471 +0000 UTC
Jan 28 15:18:51 crc kubenswrapper[4656]: I0128 15:18:51.128535 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:21:15.466973273 +0000 UTC
Jan 28 15:18:51 crc kubenswrapper[4656]: I0128 15:18:51.163696 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:51 crc kubenswrapper[4656]: I0128 15:18:51.165475 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:51 crc kubenswrapper[4656]: I0128 15:18:51.165507 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:51 crc kubenswrapper[4656]: I0128 15:18:51.165516 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:51 crc kubenswrapper[4656]: I0128 15:18:51.165537 4656 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 15:18:51 crc kubenswrapper[4656]: I0128 15:18:51.337471 4656 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 15:18:51 crc kubenswrapper[4656]: I0128 15:18:51.337565 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:18:51 crc kubenswrapper[4656]: E0128 15:18:51.451697 4656 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 28 15:18:52 crc kubenswrapper[4656]: I0128 15:18:52.129714 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:55:22.039590222 +0000 UTC
Jan 28 15:18:53 crc kubenswrapper[4656]: I0128 15:18:53.130691 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:37:55.379872728 +0000 UTC
Jan 28 15:18:54 crc kubenswrapper[4656]: I0128 15:18:54.131779 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:00:17.595086427 +0000 UTC
Jan 28 15:18:55 crc kubenswrapper[4656]: I0128 15:18:55.132443 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 06:13:55.328286996 +0000 UTC
Jan 28 15:18:56 crc kubenswrapper[4656]: I0128 15:18:56.211254 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:22:15.614681226 +0000 UTC
Jan 28 15:18:56 crc kubenswrapper[4656]: I0128 15:18:56.423316 4656 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 28 15:18:56 crc kubenswrapper[4656]: I0128 15:18:56.423466 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 28 15:18:56 crc kubenswrapper[4656]: I0128 15:18:56.428872 4656 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 28 15:18:56 crc kubenswrapper[4656]: I0128 15:18:56.428967 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 28 15:18:57 crc kubenswrapper[4656]: I0128 15:18:57.212234 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 21:36:49.339974073 +0000 UTC
Jan 28 15:18:57 crc kubenswrapper[4656]: I0128 15:18:57.966277 4656 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]log ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]etcd ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/priority-and-fairness-filter ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/start-apiextensions-informers ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/start-apiextensions-controllers ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/crd-informer-synced ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/start-system-namespaces-controller ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/bootstrap-controller ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/apiservice-registration-controller ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/apiservice-discovery-controller ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]autoregister-completion ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/apiservice-openapi-controller ok
Jan 28 15:18:57 crc kubenswrapper[4656]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 28 15:18:57 crc kubenswrapper[4656]: livez check failed
Jan 28 15:18:57 crc kubenswrapper[4656]: I0128 15:18:57.966348 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:18:58 crc kubenswrapper[4656]: I0128 15:18:58.212568 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 20:42:27.61859575 +0000 UTC
Jan 28 15:18:58 crc kubenswrapper[4656]: I0128 15:18:58.721319 4656 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 28 15:18:58 crc kubenswrapper[4656]: I0128 15:18:58.721401 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 28 15:18:59 crc kubenswrapper[4656]: I0128 15:18:59.213596 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:15:58.814231702 +0000 UTC
Jan 28 15:18:59 crc kubenswrapper[4656]: I0128 15:18:59.801044 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 28 15:18:59 crc kubenswrapper[4656]: I0128 15:18:59.801345 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:18:59 crc kubenswrapper[4656]: I0128 15:18:59.802603 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:18:59 crc kubenswrapper[4656]: I0128 15:18:59.802638 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:18:59 crc kubenswrapper[4656]: I0128 15:18:59.802678 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:18:59 crc kubenswrapper[4656]: I0128 15:18:59.815299 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 28 15:19:00 crc kubenswrapper[4656]: I0128 15:19:00.214783 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 01:28:47.923143493 +0000 UTC
Jan 28 15:19:00 crc kubenswrapper[4656]: I0128 15:19:00.707948 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:19:00 crc kubenswrapper[4656]: I0128 15:19:00.709336 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:00 crc kubenswrapper[4656]: I0128 15:19:00.709386 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:00 crc kubenswrapper[4656]: I0128 15:19:00.709402 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:01 crc kubenswrapper[4656]: I0128 15:19:01.215433 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 20:08:11.054388712 +0000 UTC
Jan 28 15:19:01 crc kubenswrapper[4656]: I0128 15:19:01.337437 4656 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 15:19:01 crc kubenswrapper[4656]: I0128 15:19:01.337546 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:19:01 crc kubenswrapper[4656]: E0128 15:19:01.409102 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="7s"
Jan 28 15:19:01 crc kubenswrapper[4656]: I0128 15:19:01.412403 4656 trace.go:236] Trace[647631526]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:18:49.045) (total time: 12366ms):
Jan 28 15:19:01 crc kubenswrapper[4656]: Trace[647631526]: ---"Objects listed" error: 12366ms (15:19:01.412)
Jan 28 15:19:01 crc kubenswrapper[4656]: Trace[647631526]: [12.366709148s] [12.366709148s] END
Jan 28 15:19:01 crc kubenswrapper[4656]: I0128 15:19:01.412442 4656 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 28 15:19:01 crc kubenswrapper[4656]: I0128 15:19:01.417113 4656 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 28 15:19:01 crc kubenswrapper[4656]: E0128 15:19:01.423369 4656 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 28 15:19:01 crc kubenswrapper[4656]: I0128 15:19:01.424722 4656 trace.go:236] Trace[1866615913]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:18:51.294) (total time: 10130ms):
Jan 28 15:19:01 crc kubenswrapper[4656]: Trace[1866615913]: ---"Objects listed" error: 10126ms (15:19:01.420)
Jan 28 15:19:01 crc kubenswrapper[4656]: Trace[1866615913]: [10.130015331s] [10.130015331s] END
Jan 28 15:19:01 crc kubenswrapper[4656]: I0128 15:19:01.424750 4656 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 28 15:19:01 crc kubenswrapper[4656]: E0128 15:19:01.451854 4656 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.216074 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:24:51.171924567 +0000 UTC
Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.717049 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.718110 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.720819 4656 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3" exitCode=255
Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.721043 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3"}
Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.721381 4656 scope.go:117] "RemoveContainer" containerID="c80831597489860182070ea4c6f6734b2feca0011557f863624a9181f66fa7c2"
Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.721628 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.722855 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.723276 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.723377 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.725387 4656 scope.go:117] "RemoveContainer" containerID="4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3"
Jan 28 15:19:02 crc kubenswrapper[4656]: E0128 15:19:02.725869 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 28 15:19:02 crc kubenswrapper[4656]: I0128 15:19:02.965225 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.114449 4656 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.129281 4656 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.217795 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:30:58.486037769 +0000 UTC
Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.725501 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.726924 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.728181 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.728287 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.728300 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.729061 4656 scope.go:117] "RemoveContainer" containerID="4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3"
Jan 28 15:19:03 crc kubenswrapper[4656]: E0128 15:19:03.729306 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 28 15:19:03 crc kubenswrapper[4656]: I0128 15:19:03.732443 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:19:04 crc kubenswrapper[4656]: I0128 15:19:04.222017 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 15:25:50.849908215 +0000 UTC
Jan 28 15:19:04 crc kubenswrapper[4656]: I0128 15:19:04.614195 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:19:04 crc kubenswrapper[4656]: I0128 15:19:04.730500 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:19:04 crc kubenswrapper[4656]: I0128 15:19:04.735683 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:04 crc kubenswrapper[4656]: I0128 15:19:04.735771 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:04 crc kubenswrapper[4656]: I0128 15:19:04.736274 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:04 crc kubenswrapper[4656]: I0128 15:19:04.737536 4656 scope.go:117] "RemoveContainer" containerID="4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3"
Jan 28 15:19:04 crc kubenswrapper[4656]: E0128 15:19:04.737856 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 28 15:19:05 crc kubenswrapper[4656]: I0128 15:19:05.328709 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:48:15.814490988 +0000 UTC
Jan 28 15:19:05 crc kubenswrapper[4656]: I0128 15:19:05.718739 4656 csr.go:261] certificate signing request csr-c58gn is approved, waiting to be issued
Jan 28 15:19:05 crc kubenswrapper[4656]: I0128 15:19:05.733794 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:19:05 crc kubenswrapper[4656]: I0128 15:19:05.735143 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:05 crc kubenswrapper[4656]: I0128 15:19:05.735207 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:05 crc kubenswrapper[4656]: I0128 15:19:05.735222 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:05 crc kubenswrapper[4656]: I0128 15:19:05.736267 4656 scope.go:117] "RemoveContainer" containerID="4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3"
Jan 28 15:19:05 crc kubenswrapper[4656]: E0128 15:19:05.736505 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 28 15:19:05 crc kubenswrapper[4656]: I0128 15:19:05.795503 4656 csr.go:257] certificate signing request csr-c58gn is issued
Jan 28 15:19:06 crc kubenswrapper[4656]: I0128 15:19:06.329688 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:42:59.813209511 +0000 UTC
Jan 28 15:19:06 crc kubenswrapper[4656]: I0128 15:19:06.934747 4656 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-28 15:14:05 +0000 UTC, rotation deadline is 2026-11-15 11:41:44.619582128 +0000 UTC
Jan 28 15:19:06 crc kubenswrapper[4656]: I0128 15:19:06.934951 4656 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6980h22m37.684635685s for next certificate rotation
Jan 28 15:19:07 crc kubenswrapper[4656]: I0128 15:19:07.330033 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:14:37.0372205 +0000 UTC
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.331449 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:26:42.911557512 +0000 UTC
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.343070 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.343979 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.346382 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.346560 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.346638 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.348354 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.423777 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.426282 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.426349 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.426365 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.426595 4656 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.704350 4656 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.704795 4656 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Jan 28 15:19:08 crc kubenswrapper[4656]: E0128 15:19:08.704854 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.719557 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.719993 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.720151 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.720264 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.720345 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:08Z","lastTransitionTime":"2026-01-28T15:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:08 crc kubenswrapper[4656]: E0128 15:19:08.752758 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.758466 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.758718 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.758796 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.758895 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.759001 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:08Z","lastTransitionTime":"2026-01-28T15:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:08 crc kubenswrapper[4656]: I0128 15:19:08.996909 4656 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:08.998357 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.110443 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.110539 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:09 crc kubenswrapper[4656]: E0128 15:19:09.141745 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\
"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:09 crc kubenswrapper[4656]: 
I0128 15:19:09.166270 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.166373 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.166388 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.166409 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.166431 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:09Z","lastTransitionTime":"2026-01-28T15:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.331930 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:09:09.238054842 +0000 UTC Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.754761 4656 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 15:19:09 crc kubenswrapper[4656]: E0128 15:19:09.779742 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.786007 4656 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.786042 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.786051 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.786067 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.786076 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:09Z","lastTransitionTime":"2026-01-28T15:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:09 crc kubenswrapper[4656]: E0128 15:19:09.799629 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:09 crc kubenswrapper[4656]: E0128 15:19:09.799780 4656 
kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.802665 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.802718 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.802730 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.802756 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.802773 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:09Z","lastTransitionTime":"2026-01-28T15:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.905905 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.905969 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.905980 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.905998 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:09 crc kubenswrapper[4656]: I0128 15:19:09.906010 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:09Z","lastTransitionTime":"2026-01-28T15:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.009073 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.009147 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.009193 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.009221 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.009281 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:10Z","lastTransitionTime":"2026-01-28T15:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.112692 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.113031 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.113056 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.113087 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.113382 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:10Z","lastTransitionTime":"2026-01-28T15:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.217127 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.217198 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.217213 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.217234 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.217248 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:10Z","lastTransitionTime":"2026-01-28T15:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.331498 4656 apiserver.go:52] "Watching apiserver" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.332251 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:10:15.464287738 +0000 UTC Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.335422 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.335479 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.335505 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.335538 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.335568 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:10Z","lastTransitionTime":"2026-01-28T15:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.340320 4656 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.341437 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rpzjg","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-node-identity/network-node-identity-vrzqb","openshift-ovn-kubernetes/ovnkube-node-kwnzt","openshift-dns/node-resolver-c695w","openshift-machine-config-operator/machine-config-daemon-8llkk","openshift-multus/multus-additional-cni-plugins-854tp","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-image-registry/node-ca-55xm4","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.342417 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.343335 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.343385 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.343512 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.343346 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.344067 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.344197 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.344488 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-c695w" Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.344649 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.344757 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.344793 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.345123 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.345350 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.345131 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.345422 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.346595 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.349142 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.351964 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.352161 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.353270 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.353401 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.353976 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.354011 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.353981 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.354577 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.354865 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.355084 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.355334 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.355613 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.355761 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.355638 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.363281 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.366883 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.367718 4656 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.368140 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.368761 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.368786 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.369066 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.369380 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.369540 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.369670 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.369905 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.370350 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.370644 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.371033 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.371318 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.372231 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.372598 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.377491 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.396220 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.398669 4656 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.403451 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.422642 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.440451 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.440498 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.440508 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.440525 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.440553 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:10Z","lastTransitionTime":"2026-01-28T15:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.447300 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.459463 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.459824 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.459974 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460077 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460192 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460312 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460427 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460543 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460649 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460756 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460860 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.460982 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461102 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461205 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461310 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461415 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461504 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461597 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461674 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461747 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461818 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461955 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462121 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462229 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462319 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462405 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462482 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462562 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462693 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462807 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462881 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462953 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463065 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463198 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463344 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463425 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463491 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463578 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463650 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463718 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463789 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463879 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463960 4656 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464032 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464105 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464243 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464358 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464443 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464546 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464630 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464704 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464773 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 
15:19:10.464854 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464935 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.465059 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.465155 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.465261 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473386 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473427 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473456 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473479 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473504 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " 
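
[Editor's note, not part of the original journal] The entries above record a kubelet restart in progress: informer caches are repopulated, the initial desired state of world is built, and every pod status patch is rejected because the "pod.network-node-identity.openshift.io" admission webhook at https://127.0.0.1:9743/pod is not listening yet (connection refused; its own pod is still in ContainerCreating). The node is simultaneously marked NotReady because no CNI configuration file exists in /etc/kubernetes/cni/net.d/, and the reconciler entries below unmount volumes of pods that were deleted while the kubelet was down. As a hedged illustration only, the minimal Go sketch below reproduces the dial the status manager keeps failing; the URL and 10s timeout are copied verbatim from the log, while skipping TLS verification is an assumption made for a localhost probe, not something the kubelet does.

// webhookprobe.go, a minimal sketch; assumes it is run on the node itself.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{
		// Same overall deadline as the ?timeout=10s seen in the log.
		Timeout: 10 * time.Second,
		Transport: &http.Transport{
			// Assumption: a localhost probe that ignores the serving cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// URL copied verbatim from the failed webhook call in the log.
	resp, err := client.Post("https://127.0.0.1:9743/pod?timeout=10s",
		"application/json", strings.NewReader("{}"))
	if err != nil {
		// While the webhook pod is still creating, this prints the same
		// "connect: connection refused" as the status_manager entries.
		fmt.Println("webhook unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("webhook answered:", resp.Status)
}

Once the webhook's containers start, the probe should get an HTTP answer instead, and the "Failed to update status for pod" entries should stop on the next status sync. [End of editor's note]
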
Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473527 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473546 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473569 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473588 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473607 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473625 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473649 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473668 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473698 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473722 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:19:10 crc 
kubenswrapper[4656]: I0128 15:19:10.473742 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473762 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473784 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473802 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473824 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473842 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473861 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473882 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474054 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474077 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474094 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474110 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474127 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474144 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474164 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474649 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474675 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474693 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474711 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474728 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474747 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474764 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474786 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474802 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474824 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474844 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474862 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474880 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474900 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474918 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474935 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474954 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474973 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.474990 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475008 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475026 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475045 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475073 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475093 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475111 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475130 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475152 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475174 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475394 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475411 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475429 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475446 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475463 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475481 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475498 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475515 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475534 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475552 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.475576 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718851 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718931 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718961 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718983 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.716110 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461441 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461434 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461696 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.461717 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462120 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462770 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.462972 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463122 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463324 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463756 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.463920 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464036 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464314 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.464783 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.465107 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.465565 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.465612 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.465839 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.465846 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.471243 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.473318 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.706741 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.707063 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.708073 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.710073 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718345 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718597 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718629 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718933 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.718937 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.726361 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.719191 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.719286 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.719883 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.720713 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.722237 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.722288 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.722449 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.722633 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.722817 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.722932 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.723584 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.723648 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.723891 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.724138 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.724304 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.724392 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.724751 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.725863 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.726009 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.725880 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.726598 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.726632 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.726762 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.726862 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.727033 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.727122 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.727334 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.727639 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728008 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728255 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728375 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728425 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728527 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728577 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728622 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728667 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728762 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.728988 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.729223 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.729713 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.729901 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.730153 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.730601 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.730706 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.731122 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.732454 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.731440 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.731588 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.731921 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.732052 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.732310 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.732511 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.732962 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.733276 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.733359 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.733514 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.733578 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.733775 4656 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.733978 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.734166 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.734261 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.735068 4656 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.735218 4656 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.735295 4656 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.735329 4656 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.735492 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:11.235326713 +0000 UTC m=+41.743497517 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.735553 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.735615 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.736425 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.736490 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.736527 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.736548 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.736834 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.737040 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.737300 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.737510 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.737297 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.737710 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.737896 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.738035 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.738116 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.738121 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.738092 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.738074 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.738362 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.738476 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739086 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.736489 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739479 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739514 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739538 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739560 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739583 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739607 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739631 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739660 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739656 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod 
"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.739872 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.740028 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.740019 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.740212 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.740418 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741340 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741380 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.740643 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741405 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.740686 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741432 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741455 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741480 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741500 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741560 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741588 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741616 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741642 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" 
(UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741672 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741700 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741733 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741773 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741806 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741822 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741843 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741861 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:10Z","lastTransitionTime":"2026-01-28T15:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741766 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742716 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742768 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742794 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742826 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742854 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742879 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742921 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742954 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742999 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" 
(UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743022 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743045 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743078 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743199 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743225 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743244 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743263 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743291 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743321 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743339 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743358 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743418 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743445 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743471 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743501 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743530 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743550 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743569 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743589 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743648 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743674 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743699 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743724 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743755 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743780 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743810 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743897 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-hostroot\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743921 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06d899c2-5ac5-4760-b71a-06c970fdc9fc-proxy-tls\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743939 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-config\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 
15:19:10.743958 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-system-cni-dir\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743988 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e54aabb4-c2b7-4000-927d-c71f81572645-serviceca\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744009 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-cni-bin\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744043 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-etc-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744075 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-netd\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744094 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68qp2\" (UniqueName: \"kubernetes.io/projected/5748c84b-daec-4bf0-bda9-180d379ab075-kube-api-access-68qp2\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744141 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744192 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e54aabb4-c2b7-4000-927d-c71f81572645-host\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744340 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744367 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-kubelet\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744384 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-systemd-units\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744414 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-cni-multus\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744437 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-tuning-conf-dir\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744461 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-k8s-cni-cncf-io\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744487 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-daemon-config\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744508 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5l8q\" (UniqueName: \"kubernetes.io/projected/e54aabb4-c2b7-4000-927d-c71f81572645-kube-api-access-c5l8q\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744538 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8f9a9023-4c07-4c93-b4d6-9034873ace37-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744559 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7662a84d-d9cb-4684-b76f-c63ffeff8344-cni-binary-copy\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745276 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745336 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-os-release\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745447 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-os-release\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745511 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-conf-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745692 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745877 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-slash\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745938 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.746014 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.746072 4656 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/06d899c2-5ac5-4760-b71a-06c970fdc9fc-rootfs\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.746110 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-multus-certs\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.746151 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.746198 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.746224 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.746473 4656 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.746996 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-log-socket\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747044 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-cnibin\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747073 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-netns\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747120 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tb2m\" (UniqueName: \"kubernetes.io/projected/06d899c2-5ac5-4760-b71a-06c970fdc9fc-kube-api-access-2tb2m\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747139 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcmdq\" (UniqueName: \"kubernetes.io/projected/8f9a9023-4c07-4c93-b4d6-9034873ace37-kube-api-access-qcmdq\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747184 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747208 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-system-cni-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747236 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747299 4656 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747485 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-netns\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.748943 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-systemd\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749024 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-ovn\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749057 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748c84b-daec-4bf0-bda9-180d379ab075-ovn-node-metrics-cert\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749149 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5994c1d0-57bd-4f0d-a63f-6e0f54746c3e-hosts-file\") pod \"node-resolver-c695w\" (UID: \"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\") " pod="openshift-dns/node-resolver-c695w" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749228 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4pbr\" (UniqueName: \"kubernetes.io/projected/5994c1d0-57bd-4f0d-a63f-6e0f54746c3e-kube-api-access-r4pbr\") pod \"node-resolver-c695w\" (UID: \"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\") " pod="openshift-dns/node-resolver-c695w" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749259 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-kubelet\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749308 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l84dh\" (UniqueName: \"kubernetes.io/projected/7662a84d-d9cb-4684-b76f-c63ffeff8344-kube-api-access-l84dh\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 
15:19:10.749339 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06d899c2-5ac5-4760-b71a-06c970fdc9fc-mcd-auth-proxy-config\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749387 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749451 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-node-log\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749485 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-env-overrides\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749516 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-cnibin\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749544 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-socket-dir-parent\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749617 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-var-lib-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749724 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-cni-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749772 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749801 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-ovn-kubernetes\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749824 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-bin\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749846 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-script-lib\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749908 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749935 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-etc-kubernetes\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749967 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.750003 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8f9a9023-4c07-4c93-b4d6-9034873ace37-cni-binary-copy\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.751261 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.740696 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.740788 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741637 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.741698 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.777817 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.779107 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.741871 4656 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.757925 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.781803 4656 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.781895 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.784231 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.787913 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.788316 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.788469 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.788631 4656 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.789343 4656 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.789569 4656 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.789694 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.789829 4656 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.790001 4656 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.790139 4656 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.790470 4656 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.790592 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.790723 4656 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.790830 4656 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.790948 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.790962 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791122 4656 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791141 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791164 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791222 4656 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791237 4656 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791252 4656 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791281 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791293 4656 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791306 4656 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791321 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791343 4656 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791362 4656 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791373 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791383 4656 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791394 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791405 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791415 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791425 4656 reconciler_common.go:293] 
"Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791441 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791459 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791473 4656 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791486 4656 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791500 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791513 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791526 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791536 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791545 4656 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791556 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791567 4656 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791577 4656 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791587 4656 reconciler_common.go:293] "Volume detached 
for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791598 4656 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791610 4656 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791619 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791629 4656 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791641 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791652 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.773945 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791827 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791844 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791855 4656 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.786453 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.741900 4656 reflector.go:484] 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.783197 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.741921 4656 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.741944 4656 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.741963 4656 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.791972 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792017 4656 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792038 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792082 4656 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792097 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792109 4656 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792124 4656 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") 
on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792163 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792193 4656 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792206 4656 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792218 4656 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792231 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792243 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792283 4656 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792300 4656 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792323 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792341 4656 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792353 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792363 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792373 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc 
kubenswrapper[4656]: I0128 15:19:10.792382 4656 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792392 4656 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792402 4656 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792413 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792423 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792434 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792443 4656 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792453 4656 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792466 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792476 4656 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792486 4656 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792495 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792507 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 
28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792516 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792529 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792553 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792562 4656 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792572 4656 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792582 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792610 4656 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792619 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792641 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792654 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792665 4656 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792681 4656 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792705 4656 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 
15:19:10.792717 4656 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792729 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792741 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792757 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792770 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792785 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792796 4656 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792807 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.792821 4656 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.761885 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.741988 4656 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742010 4656 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc 
kubenswrapper[4656]: W0128 15:19:10.742284 4656 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742312 4656 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742332 4656 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742353 4656 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742377 4656 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742403 4656 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742427 4656 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742460 4656 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742488 4656 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742472 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742514 4656 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.742539 4656 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.742923 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743254 4656 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743282 4656 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743302 4656 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743334 4656 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743356 4656 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.794376 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.795933 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.743468 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743675 4656 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743698 4656 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743718 4656 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743739 4656 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743805 4656 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743935 4656 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743968 4656 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.743998 4656 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": 
Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.744380 4656 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: very short watch: pkg/kubelet/config/apiserver.go:66: Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.744640 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745212 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745345 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.745481 4656 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.796446 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:11.296411632 +0000 UTC m=+41.804582616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.745741 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747361 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747437 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.747769 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.748021 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.748202 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.748331 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.748892 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.748950 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749030 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749045 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749375 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749495 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.749731 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.750308 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.750676 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.751042 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.751252 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.751387 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.751947 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.752196 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.752156 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.752424 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.752533 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.752965 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.753077 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.753127 4656 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.799027 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:11.299005127 +0000 UTC m=+41.807175931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.753613 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.755814 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.755598 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.756063 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.756226 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.758239 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.758526 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.758747 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.759503 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.759747 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.759931 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.760443 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.760472 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.761033 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.761144 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.761246 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.761740 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.768680 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.768843 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.772664 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.773211 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.777525 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: W0128 15:19:10.741827 4656 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.780197 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.780382 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.780412 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.781050 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.782096 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.782348 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.783011 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.785299 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.785685 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.786431 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.799420 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.799437 4656 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.799508 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-28 15:19:11.299499291 +0000 UTC m=+41.807670095 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.786590 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.799531 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.799538 4656 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:10 crc kubenswrapper[4656]: E0128 15:19:10.799563 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:11.299556933 +0000 UTC m=+41.807727737 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.786592 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.786921 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.786984 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.787457 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.787690 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.787835 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.802819 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.805809 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.806376 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.809155 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.814905 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.845175 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.845231 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.845247 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.845268 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.845281 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:10Z","lastTransitionTime":"2026-01-28T15:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.893890 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-netns\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.893963 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-systemd\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.893991 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-ovn\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894017 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748c84b-daec-4bf0-bda9-180d379ab075-ovn-node-metrics-cert\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894057 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5994c1d0-57bd-4f0d-a63f-6e0f54746c3e-hosts-file\") pod \"node-resolver-c695w\" (UID: \"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\") " 
pod="openshift-dns/node-resolver-c695w" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894084 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4pbr\" (UniqueName: \"kubernetes.io/projected/5994c1d0-57bd-4f0d-a63f-6e0f54746c3e-kube-api-access-r4pbr\") pod \"node-resolver-c695w\" (UID: \"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\") " pod="openshift-dns/node-resolver-c695w" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894109 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-kubelet\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894131 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l84dh\" (UniqueName: \"kubernetes.io/projected/7662a84d-d9cb-4684-b76f-c63ffeff8344-kube-api-access-l84dh\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894153 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06d899c2-5ac5-4760-b71a-06c970fdc9fc-mcd-auth-proxy-config\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894215 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-node-log\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894238 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-env-overrides\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894264 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-cnibin\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894290 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-socket-dir-parent\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894312 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-var-lib-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc 
kubenswrapper[4656]: I0128 15:19:10.894331 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-cni-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894352 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894373 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-ovn-kubernetes\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894396 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-bin\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894416 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-script-lib\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894438 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894461 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-etc-kubernetes\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894484 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8f9a9023-4c07-4c93-b4d6-9034873ace37-cni-binary-copy\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894506 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-hostroot\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894527 4656 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06d899c2-5ac5-4760-b71a-06c970fdc9fc-proxy-tls\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894548 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-config\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894572 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-system-cni-dir\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894593 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e54aabb4-c2b7-4000-927d-c71f81572645-serviceca\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894614 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-cni-bin\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894634 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-etc-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894658 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-netd\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894680 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68qp2\" (UniqueName: \"kubernetes.io/projected/5748c84b-daec-4bf0-bda9-180d379ab075-kube-api-access-68qp2\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894701 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e54aabb4-c2b7-4000-927d-c71f81572645-host\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894733 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-kubelet\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894755 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-systemd-units\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894780 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-cni-multus\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894801 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-tuning-conf-dir\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894823 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-k8s-cni-cncf-io\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894844 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-daemon-config\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894864 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5l8q\" (UniqueName: \"kubernetes.io/projected/e54aabb4-c2b7-4000-927d-c71f81572645-kube-api-access-c5l8q\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894887 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8f9a9023-4c07-4c93-b4d6-9034873ace37-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894911 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7662a84d-d9cb-4684-b76f-c63ffeff8344-cni-binary-copy\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894933 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894953 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-os-release\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.894974 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-os-release\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.895001 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-conf-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.895118 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-slash\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.895144 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.895195 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/06d899c2-5ac5-4760-b71a-06c970fdc9fc-rootfs\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.895216 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-multus-certs\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.895265 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-log-socket\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.896945 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-cnibin\") pod \"multus-rpzjg\" (UID: 
\"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.897146 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-netns\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.897195 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tb2m\" (UniqueName: \"kubernetes.io/projected/06d899c2-5ac5-4760-b71a-06c970fdc9fc-kube-api-access-2tb2m\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.897208 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-socket-dir-parent\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.897247 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-slash\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.897228 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcmdq\" (UniqueName: \"kubernetes.io/projected/8f9a9023-4c07-4c93-b4d6-9034873ace37-kube-api-access-qcmdq\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.897385 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-system-cni-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.897759 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.898331 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7662a84d-d9cb-4684-b76f-c63ffeff8344-cni-binary-copy\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899450 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e54aabb4-c2b7-4000-927d-c71f81572645-host\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899492 4656 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-ovn-kubernetes\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899494 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/06d899c2-5ac5-4760-b71a-06c970fdc9fc-rootfs\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899525 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-multus-certs\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899550 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-bin\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899601 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899679 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-os-release\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899924 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-cni-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899964 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-kubelet\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.899996 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-etc-kubernetes\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.900263 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-log-socket\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.900316 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-cnibin\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.900343 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-netns\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.900678 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-netns\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.900721 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-kubelet\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.900857 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-system-cni-dir\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.900913 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-systemd\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.901219 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-ovn\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.901947 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-env-overrides\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.902065 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06d899c2-5ac5-4760-b71a-06c970fdc9fc-mcd-auth-proxy-config\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " 
pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.902155 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-script-lib\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.902329 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-os-release\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.902098 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-conf-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903329 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-cnibin\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903600 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903654 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-system-cni-dir\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903664 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-netd\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903700 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903681 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-var-lib-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903750 4656 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5994c1d0-57bd-4f0d-a63f-6e0f54746c3e-hosts-file\") pod \"node-resolver-c695w\" (UID: \"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\") " pod="openshift-dns/node-resolver-c695w" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903792 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-node-log\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.903813 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-etc-openvswitch\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.904262 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f9a9023-4c07-4c93-b4d6-9034873ace37-tuning-conf-dir\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.904351 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-systemd-units\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.904433 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-hostroot\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.904644 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-run-k8s-cni-cncf-io\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.904707 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-cni-bin\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.904745 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7662a84d-d9cb-4684-b76f-c63ffeff8344-host-var-lib-cni-multus\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.905139 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7662a84d-d9cb-4684-b76f-c63ffeff8344-multus-daemon-config\") pod \"multus-rpzjg\" 
(UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.905904 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8f9a9023-4c07-4c93-b4d6-9034873ace37-cni-binary-copy\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906369 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906450 4656 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906499 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906514 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906535 4656 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906550 4656 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906563 4656 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906578 4656 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906602 4656 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906617 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906631 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906647 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" 
(UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906667 4656 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906681 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906697 4656 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906713 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906757 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906771 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906784 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906801 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906815 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906827 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906839 4656 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906856 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906869 4656 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906882 4656 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906895 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906911 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906923 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906940 4656 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906958 4656 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.906996 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907009 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907022 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907039 4656 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907052 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907065 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907077 4656 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907095 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907108 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907123 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.907135 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.909924 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06d899c2-5ac5-4760-b71a-06c970fdc9fc-proxy-tls\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910219 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910278 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910311 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910326 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910352 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910368 4656 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910388 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath 
\"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910404 4656 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910424 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910438 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910465 4656 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910484 4656 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910499 4656 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910513 4656 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910527 4656 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910546 4656 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910561 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910575 4656 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910693 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910712 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 
15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910730 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910760 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910777 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910798 4656 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910813 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910826 4656 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910847 4656 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910862 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910876 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910891 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910909 4656 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910923 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910940 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node 
\"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910954 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910977 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.910994 4656 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.911008 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.911026 4656 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.911146 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-config\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.912747 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8f9a9023-4c07-4c93-b4d6-9034873ace37-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.920106 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e54aabb4-c2b7-4000-927d-c71f81572645-serviceca\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.924153 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4pbr\" (UniqueName: \"kubernetes.io/projected/5994c1d0-57bd-4f0d-a63f-6e0f54746c3e-kube-api-access-r4pbr\") pod \"node-resolver-c695w\" (UID: \"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\") " pod="openshift-dns/node-resolver-c695w" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.924547 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748c84b-daec-4bf0-bda9-180d379ab075-ovn-node-metrics-cert\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.926506 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tb2m\" (UniqueName: 
\"kubernetes.io/projected/06d899c2-5ac5-4760-b71a-06c970fdc9fc-kube-api-access-2tb2m\") pod \"machine-config-daemon-8llkk\" (UID: \"06d899c2-5ac5-4760-b71a-06c970fdc9fc\") " pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.926933 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l84dh\" (UniqueName: \"kubernetes.io/projected/7662a84d-d9cb-4684-b76f-c63ffeff8344-kube-api-access-l84dh\") pod \"multus-rpzjg\" (UID: \"7662a84d-d9cb-4684-b76f-c63ffeff8344\") " pod="openshift-multus/multus-rpzjg" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.929015 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5l8q\" (UniqueName: \"kubernetes.io/projected/e54aabb4-c2b7-4000-927d-c71f81572645-kube-api-access-c5l8q\") pod \"node-ca-55xm4\" (UID: \"e54aabb4-c2b7-4000-927d-c71f81572645\") " pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.929807 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcmdq\" (UniqueName: \"kubernetes.io/projected/8f9a9023-4c07-4c93-b4d6-9034873ace37-kube-api-access-qcmdq\") pod \"multus-additional-cni-plugins-854tp\" (UID: \"8f9a9023-4c07-4c93-b4d6-9034873ace37\") " pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.929817 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68qp2\" (UniqueName: \"kubernetes.io/projected/5748c84b-daec-4bf0-bda9-180d379ab075-kube-api-access-68qp2\") pod \"ovnkube-node-kwnzt\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.949988 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.950048 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.950061 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.950083 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.950098 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:10Z","lastTransitionTime":"2026-01-28T15:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:10 crc kubenswrapper[4656]: I0128 15:19:10.982834 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-854tp" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.005770 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.015735 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.262628 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.263260 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.263600 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:12.263572579 +0000 UTC m=+42.771743383 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.263971 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-55xm4" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.265511 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rpzjg" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.266026 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-c695w" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.266231 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.266743 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.268912 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.268938 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.268947 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.268967 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.268981 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:11Z","lastTransitionTime":"2026-01-28T15:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.271133 4656 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.273635 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.274823 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.275605 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.276923 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.277636 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.279573 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.280400 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.284740 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.285640 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.288004 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.288714 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.293268 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.294235 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.295342 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.296727 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.297481 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.302779 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.303299 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.304073 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.305452 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.306004 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.356272 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:16:16.238890098 +0000 UTC Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.365296 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.365352 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.365388 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.365415 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365560 4656 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365650 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:12.365619848 +0000 UTC m=+42.873790652 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365649 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365675 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365688 4656 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365718 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365786 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365806 4656 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365868 4656 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365738 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:12.365729161 +0000 UTC m=+42.873899975 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.365931 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:12.365903156 +0000 UTC m=+42.874074040 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:11 crc kubenswrapper[4656]: E0128 15:19:11.366041 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:12.365946028 +0000 UTC m=+42.874116932 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.373144 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.373202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.373211 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.373227 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.373237 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:11Z","lastTransitionTime":"2026-01-28T15:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.383091 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.383779 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.386947 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.387490 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.389106 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.390151 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.391305 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.392191 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.393504 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.394125 4656 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.394323 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.396952 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.397853 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.398346 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" 
path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.400910 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.401692 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.402884 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.403679 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.404870 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.405571 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.407073 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.408010 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.409496 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.409960 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.411340 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.411870 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.413609 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.414263 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.415575 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.416118 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.416738 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.418015 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.418695 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.519839 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.521225 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.521265 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.521297 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.521314 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:11Z","lastTransitionTime":"2026-01-28T15:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.605259 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.615841 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.822654 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.822709 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.822728 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.822751 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.822774 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:11Z","lastTransitionTime":"2026-01-28T15:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:11 crc kubenswrapper[4656]: W0128 15:19:11.823029 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode54aabb4_c2b7_4000_927d_c71f81572645.slice/crio-cc9d8cfbf446cbaf3e80c3cceb68a76c7505c7014eeff1e4d4ea235e5f070344 WatchSource:0}: Error finding container cc9d8cfbf446cbaf3e80c3cceb68a76c7505c7014eeff1e4d4ea235e5f070344: Status 404 returned error can't find the container with id cc9d8cfbf446cbaf3e80c3cceb68a76c7505c7014eeff1e4d4ea235e5f070344 Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.829398 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.829686 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.829866 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.830086 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.830276 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.830715 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.832023 4656 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.836771 4656 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.837034 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.849121 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.868236 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.898808 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.912948 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.926815 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.929123 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.930804 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.930826 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.930836 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.930855 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.930866 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:11Z","lastTransitionTime":"2026-01-28T15:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.954868 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.955464 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.957524 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.976783 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.982995 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.983313 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.998620 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:11 crc kubenswrapper[4656]: I0128 15:19:11.998820 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.003684 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.013301 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e9759344c4c166b923c1743cb258cc7c03bb1ea2642fb069b4d977cb647473df"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.019733 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-c695w" event={"ID":"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e","Type":"ContainerStarted","Data":"76a08d4387fa80a30cf597984eda9385818fdac0faecc9abbae2bb50e2f3bc11"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.021561 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"99f3f58926f9f5145244dbe6e9acfd081f57a6d5e67d0fa71fb1124101e0bee2"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.026397 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.035714 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.041483 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.041732 4656 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.041991 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.042105 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.042123 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:12Z","lastTransitionTime":"2026-01-28T15:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.043458 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.047564 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"734b0af208a3f29c981557a5245daa4fc21a51ab4cf5fa8e015542be87200193"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.048020 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.054625 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"caa6333e4a167c18cc52a674e48e299d4c281867ef0c741bf7e1ba8c6827b13b"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.054820 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.057999 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.075037 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.075545 4656 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.075939 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e36be3e25a4ba99df8f847b1f2ae59ff2459bed7baaa6170a8facef2f0bccccf"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.084237 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-55xm4" event={"ID":"e54aabb4-c2b7-4000-927d-c71f81572645","Type":"ContainerStarted","Data":"cc9d8cfbf446cbaf3e80c3cceb68a76c7505c7014eeff1e4d4ea235e5f070344"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.088639 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerStarted","Data":"b1f1c83f234543b1908577b7478154042fb10bfd05989386ce47719a52921c15"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.091050 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rpzjg" event={"ID":"7662a84d-d9cb-4684-b76f-c63ffeff8344","Type":"ContainerStarted","Data":"4cef9cbb1b627ac09954266d05140f1667515bb328c389ef233a1a83526f93ca"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.091493 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.107368 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.110357 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.125608 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.142529 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.145860 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.148918 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.148948 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.148957 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.148975 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.148987 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:12Z","lastTransitionTime":"2026-01-28T15:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.161999 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.170243 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.170772 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.170999 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.171225 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.171470 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.171554 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.173201 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.181794 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.205188 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.221553 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.224376 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.236034 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.244318 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.258712 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.260587 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.271370 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.274033 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.287999 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.301688 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.315093 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.316725 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.318983 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 15:19:12 crc 
kubenswrapper[4656]: I0128 15:19:12.330725 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.330993 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:14.330952355 +0000 UTC m=+44.839123159 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.331064 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.348320 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.357363 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:38:43.760629462 +0000 UTC Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.371048 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.390939 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.395659 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.395701 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.395713 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.395731 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.395740 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:12Z","lastTransitionTime":"2026-01-28T15:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.464101 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.464158 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.464214 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.464251 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.464430 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.464459 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.464473 4656 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.464547 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:14.464530332 +0000 UTC m=+44.972701136 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.464939 4656 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.464970 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:14.464960615 +0000 UTC m=+44.973131419 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.465043 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.465060 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.465070 4656 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.465099 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:14.465090068 +0000 UTC m=+44.973260872 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.465191 4656 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:12 crc kubenswrapper[4656]: E0128 15:19:12.465229 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-28 15:19:14.465220002 +0000 UTC m=+44.973390806 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.528525 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.528590 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.528604 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.528629 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.528646 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:12Z","lastTransitionTime":"2026-01-28T15:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.668039 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.668065 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.668073 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.668089 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.668100 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:12Z","lastTransitionTime":"2026-01-28T15:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.814976 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.815588 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.815603 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.815623 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.815633 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:12Z","lastTransitionTime":"2026-01-28T15:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.947619 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.947685 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.947696 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.947722 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:12 crc kubenswrapper[4656]: I0128 15:19:12.947735 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:12Z","lastTransitionTime":"2026-01-28T15:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.052132 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.052205 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.052217 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.052238 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.052250 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.097821 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.102010 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.102038 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.107417 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-55xm4" event={"ID":"e54aabb4-c2b7-4000-927d-c71f81572645","Type":"ContainerStarted","Data":"1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.109633 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-c695w" event={"ID":"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e","Type":"ContainerStarted","Data":"e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.111047 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerStarted","Data":"14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.116303 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970" exitCode=0 Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.116534 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.118649 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rpzjg" event={"ID":"7662a84d-d9cb-4684-b76f-c63ffeff8344","Type":"ContainerStarted","Data":"469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.121163 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.122046 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1"} 
Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.200832 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.202711 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.202758 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.202773 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.202796 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.202811 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.304896 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.305254 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.305269 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.305289 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.305304 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.318239 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.333615 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.344665 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.358819 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:43:56.343253021 +0000 UTC Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.361206 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.372616 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.390870 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.408863 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.408935 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.408963 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.408985 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.408999 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.414828 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.431207 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.451807 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.471982 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.530833 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.531955 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.531977 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.531986 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.532006 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.532016 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.552411 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.573124 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.636724 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.636765 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.636776 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.636792 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.636803 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.637590 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.647156 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.665814 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.678667 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.739736 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.739783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.739795 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.739819 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.739832 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.740926 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.888032 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.888070 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.888082 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.888114 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.888126 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.889147 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.908370 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.953239 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.965659 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.991256 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.991470 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.991503 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.991515 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.991537 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:13 crc kubenswrapper[4656]: I0128 15:19:13.991553 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:13Z","lastTransitionTime":"2026-01-28T15:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.094340 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.094368 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.094377 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.094396 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.094409 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:14Z","lastTransitionTime":"2026-01-28T15:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.137076 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.137177 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.170898 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.170937 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.171018 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.171235 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.171552 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.171836 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.196970 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.197426 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.197558 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.197654 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.197741 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:14Z","lastTransitionTime":"2026-01-28T15:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.299999 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.300059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.300094 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.300113 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.300126 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:14Z","lastTransitionTime":"2026-01-28T15:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.405139 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:25:53.766067188 +0000 UTC Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.405309 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.405596 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:18.405567193 +0000 UTC m=+48.913737997 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.407145 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.407309 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.407405 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.407481 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.407567 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:14Z","lastTransitionTime":"2026-01-28T15:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.519605 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.519687 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.519725 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.519751 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.519936 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.519964 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.519961 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.520000 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.520016 4656 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.519981 4656 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.520019 4656 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.520105 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:18.520072191 +0000 UTC m=+49.028243165 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.520133 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:18.520122242 +0000 UTC m=+49.028293256 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.520154 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:18.520145073 +0000 UTC m=+49.028316087 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.520260 4656 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:14 crc kubenswrapper[4656]: E0128 15:19:14.520315 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:18.520289587 +0000 UTC m=+49.028460391 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.523528 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.523570 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.523582 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.523604 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.523621 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:14Z","lastTransitionTime":"2026-01-28T15:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.636112 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.637126 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.637259 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.637453 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.637608 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:14Z","lastTransitionTime":"2026-01-28T15:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.795182 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.795219 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.795229 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.795258 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.795268 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:14Z","lastTransitionTime":"2026-01-28T15:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.897099 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.897151 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.897187 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.897208 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:14 crc kubenswrapper[4656]: I0128 15:19:14.897219 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:14Z","lastTransitionTime":"2026-01-28T15:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.004707 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.004768 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.004782 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.004812 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.004825 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:15Z","lastTransitionTime":"2026-01-28T15:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.108793 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.108861 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.108873 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.108918 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.108933 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:15Z","lastTransitionTime":"2026-01-28T15:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.142916 4656 generic.go:334] "Generic (PLEG): container finished" podID="8f9a9023-4c07-4c93-b4d6-9034873ace37" containerID="14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69" exitCode=0 Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.142978 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerDied","Data":"14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.147071 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.147265 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.147350 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.155653 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.166818 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.211193 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.226345 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.277266 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.277299 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.277308 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.277327 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.277336 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:15Z","lastTransitionTime":"2026-01-28T15:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.328590 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.347880 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.374696 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.387512 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.387551 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.387563 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.387581 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.387594 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:15Z","lastTransitionTime":"2026-01-28T15:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.392061 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.406312 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 
22:07:35.114964891 +0000 UTC Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.408565 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.427234 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.444873 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.458189 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.487729 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.510344 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.510376 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.510388 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.510405 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.510415 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:15Z","lastTransitionTime":"2026-01-28T15:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.510501 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.525022 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.544268 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.581217 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.598898 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.618915 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.631423 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.673848 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.673897 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.673909 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.673929 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.673942 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:15Z","lastTransitionTime":"2026-01-28T15:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.679022 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.780568 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.782899 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.782951 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.782971 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.782998 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:15 crc 
kubenswrapper[4656]: I0128 15:19:15.783024 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:15Z","lastTransitionTime":"2026-01-28T15:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.805227 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc 
kubenswrapper[4656]: I0128 15:19:15.821859 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.837033 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.885359 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.885403 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.885412 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.885429 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:15 crc kubenswrapper[4656]: I0128 15:19:15.885439 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:15Z","lastTransitionTime":"2026-01-28T15:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.025372 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.025422 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.025433 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.025452 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.025464 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:16Z","lastTransitionTime":"2026-01-28T15:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.128768 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.128815 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.128824 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.128843 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.128853 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:16Z","lastTransitionTime":"2026-01-28T15:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.170997 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.171018 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.171051 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:16 crc kubenswrapper[4656]: E0128 15:19:16.171262 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:16 crc kubenswrapper[4656]: E0128 15:19:16.171401 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:16 crc kubenswrapper[4656]: E0128 15:19:16.171745 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.200365 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.202974 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerStarted","Data":"ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.219500 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.230999 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.231245 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.231321 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.231385 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.231442 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:16Z","lastTransitionTime":"2026-01-28T15:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.236437 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.304387 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.316320 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.331914 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.333961 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.334132 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.334250 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.334360 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.334458 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:16Z","lastTransitionTime":"2026-01-28T15:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.366200 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.380832 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.407075 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 07:09:27.632144362 +0000 UTC Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.438737 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.438822 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.438855 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.438902 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.438942 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:16Z","lastTransitionTime":"2026-01-28T15:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.542069 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.542116 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.542131 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.542154 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.542194 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:16Z","lastTransitionTime":"2026-01-28T15:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.560468 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.606492 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\
\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.724514 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.783698 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.783738 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.783748 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.783765 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.783776 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:16Z","lastTransitionTime":"2026-01-28T15:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.797832 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entry
point\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.815990 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.886649 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.887017 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.887100 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.887194 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:16 crc kubenswrapper[4656]: I0128 15:19:16.887270 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:16Z","lastTransitionTime":"2026-01-28T15:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.004141 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.004191 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.004204 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.004221 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.004233 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:17Z","lastTransitionTime":"2026-01-28T15:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.107252 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.107285 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.107295 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.107310 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.107320 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:17Z","lastTransitionTime":"2026-01-28T15:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.344523 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.344561 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.344569 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.344583 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.344595 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:17Z","lastTransitionTime":"2026-01-28T15:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.407926 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 06:08:17.011675535 +0000 UTC Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.454566 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.454606 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.454615 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.454632 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.454641 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:17Z","lastTransitionTime":"2026-01-28T15:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.610826 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.611183 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.611196 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.611213 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.611224 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:17Z","lastTransitionTime":"2026-01-28T15:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.714211 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.714249 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.714260 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.714278 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.714291 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:17Z","lastTransitionTime":"2026-01-28T15:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.817413 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.817523 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.817540 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.817560 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.817571 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:17Z","lastTransitionTime":"2026-01-28T15:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.921105 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.921188 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.921205 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.921237 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:17 crc kubenswrapper[4656]: I0128 15:19:17.921256 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:17Z","lastTransitionTime":"2026-01-28T15:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.024954 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.025002 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.025011 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.025027 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.025036 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:18Z","lastTransitionTime":"2026-01-28T15:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.127970 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.128020 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.128040 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.128064 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.128081 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:18Z","lastTransitionTime":"2026-01-28T15:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.348818 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.349026 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.349552 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.349648 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.349717 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.349779 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.352413 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.352489 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.352506 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.352537 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.352549 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:18Z","lastTransitionTime":"2026-01-28T15:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
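(The records above are the kubelet's NotReady loop: every heartbeat repeats "no CNI configuration file in /etc/kubernetes/cni/net.d/" while pod sandboxes wait on the network plugin. A minimal sketch of re-checking that condition on the node itself follows; the directory path is taken from the log message, while the extension filter and everything else are illustrative assumptions, not part of the capture.)

    #!/usr/bin/env python3
    # Sketch: list the CNI config directory the kubelet names above.
    # CNI loaders typically read .conf, .conflist and .json files;
    # an empty result matches the NetworkPluginNotReady condition.
    import os

    CNI_DIR = "/etc/kubernetes/cni/net.d/"

    try:
        confs = [f for f in os.listdir(CNI_DIR)
                 if f.endswith((".conf", ".conflist", ".json"))]
    except FileNotFoundError:
        confs = []

    if confs:
        print("CNI config present:", ", ".join(sorted(confs)))
    else:
        print("no CNI configuration file in", CNI_DIR,
              "- matches the NetworkPluginNotReady records above")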
Has your network provider started?"} Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.359718 4656 generic.go:334] "Generic (PLEG): container finished" podID="8f9a9023-4c07-4c93-b4d6-9034873ace37" containerID="ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6" exitCode=0 Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.359768 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerDied","Data":"ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6"} Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.377453 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.438540 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:00:26.122520887 +0000 UTC Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.612194 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.612391 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.612456 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.612523 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.612563 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.612690 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:26.612661727 +0000 UTC m=+57.120832561 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.612836 4656 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.612894 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:26.612880673 +0000 UTC m=+57.121051507 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613389 4656 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613502 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:26.61346868 +0000 UTC m=+57.121639674 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613522 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613547 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613566 4656 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613567 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613583 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613596 4656 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613614 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:26.613598224 +0000 UTC m=+57.121769058 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:18 crc kubenswrapper[4656]: E0128 15:19:18.613644 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:26.613629214 +0000 UTC m=+57.121800058 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.617413 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.617461 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.617483 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.617509 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.617526 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:18Z","lastTransitionTime":"2026-01-28T15:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.630774 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.708360 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.797910 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.797936 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.797944 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.797958 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:18 crc kubenswrapper[4656]: I0128 15:19:18.797967 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:18Z","lastTransitionTime":"2026-01-28T15:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.018783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.018818 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.018830 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.018847 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.018861 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:19Z","lastTransitionTime":"2026-01-28T15:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.053862 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.121480 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.121505 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.121516 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.121530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.121540 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:19Z","lastTransitionTime":"2026-01-28T15:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.252555 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.252587 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.252596 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.252608 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.252617 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:19Z","lastTransitionTime":"2026-01-28T15:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.256379 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1a
d9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.279738 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.302955 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.321493 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.336648 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.431517 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.435814 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.435847 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.435859 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.435875 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.435887 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:19Z","lastTransitionTime":"2026-01-28T15:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.437833 4656 generic.go:334] "Generic (PLEG): container finished" podID="8f9a9023-4c07-4c93-b4d6-9034873ace37" containerID="8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d" exitCode=0 Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.437894 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerDied","Data":"8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.444014 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.469118 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 10:42:52.214739941 +0000 UTC Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.484941 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.668338 4656 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.671310 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.671356 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.671368 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.671388 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.671399 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:19Z","lastTransitionTime":"2026-01-28T15:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.704555 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.742434 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.761069 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.781008 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.819753 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.819813 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.819824 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.819849 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.820296 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:19Z","lastTransitionTime":"2026-01-28T15:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.823802 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.889531 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.911807 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.931909 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.931947 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.931958 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.931975 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.931990 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:19Z","lastTransitionTime":"2026-01-28T15:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.932718 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.953649 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.972636 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:19 crc kubenswrapper[4656]: I0128 15:19:19.988010 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.005371 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.235488 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.235541 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.235621 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.235688 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.235796 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.235908 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.238933 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.238985 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.239006 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.239026 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.239039 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.240633 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.240690 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.240734 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.240756 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.240768 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.260102 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.261350 4656 scope.go:117] "RemoveContainer" containerID="4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.272452 4656 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.272734 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.272768 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.272779 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.272791 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.272803 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.302478 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.312740 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.312834 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.312865 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.312915 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.312933 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.353773 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.363206 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.363276 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.363296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.363329 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.363354 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.391525 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.396836 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.396869 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.396879 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.396896 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.396907 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.417121 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: E0128 15:19:20.417405 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.419792 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.419821 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.419829 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.419844 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.419852 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.452937 4656 generic.go:334] "Generic (PLEG): container finished" podID="8f9a9023-4c07-4c93-b4d6-9034873ace37" containerID="6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f" exitCode=0 Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.453382 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerDied","Data":"6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f"} Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.469822 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:54:17.063899497 +0000 UTC Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.480359 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.503680 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.522294 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.523280 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.523982 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.524020 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.524043 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.524072 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.542261 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.569687 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.585893 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.602525 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.615561 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.626723 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.626750 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.626758 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.626773 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.626782 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.632698 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.653898 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.695360 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.723360 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.730389 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.730430 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.730442 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.730461 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.730473 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.740209 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.833506 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.833546 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.833563 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.833590 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.833608 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.936464 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.936501 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.936510 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.936526 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:20 crc kubenswrapper[4656]: I0128 15:19:20.936536 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:20Z","lastTransitionTime":"2026-01-28T15:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.038606 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.038920 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.038990 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.039054 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.039114 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:21Z","lastTransitionTime":"2026-01-28T15:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.141319 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.141355 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.141371 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.141391 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.141404 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:21Z","lastTransitionTime":"2026-01-28T15:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.188624 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.214501 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.237493 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.254668 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.271301 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.297348 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.297389 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.297415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.297434 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.297446 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:21Z","lastTransitionTime":"2026-01-28T15:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.304658 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.320285 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.365473 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.384963 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.400433 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.400475 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.400486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.400509 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.400521 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:21Z","lastTransitionTime":"2026-01-28T15:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.402446 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.418433 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.684398 4656 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d
1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.685426 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 12:14:53.40887575 +0000 UTC Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.700505 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.720256 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.720532 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.720602 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.720667 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.720728 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:21Z","lastTransitionTime":"2026-01-28T15:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.741510 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.744328 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821"} Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.744690 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.758138 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerStarted","Data":"23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516"} Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.767363 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.776640 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82"} Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.778999 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.779133 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.779194 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.781484 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:21 crc kubenswrapper[4656]: I0128 15:19:21.799667 4656 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.001953 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.001982 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.001991 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.002035 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.002052 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.016628 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.057781 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.062024 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: 
I0128 15:19:22.076889 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.105379 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.105415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.105426 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.105444 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.105457 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.156962 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.165920 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.170496 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:22 crc kubenswrapper[4656]: E0128 15:19:22.170657 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.171119 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:22 crc kubenswrapper[4656]: E0128 15:19:22.171236 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.171306 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:22 crc kubenswrapper[4656]: E0128 15:19:22.171353 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.190890 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.208878 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.208962 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.209005 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.209027 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.209039 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.211966 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.228722 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.245643 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.311597 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.311625 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.311633 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.311647 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.311657 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.322827 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.337008 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.354234 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.374010 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.523885 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.523929 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.523941 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.523959 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.523971 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.528872 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.545056 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.660774 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.661972 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.661991 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.661999 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.662012 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.662021 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.685029 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622d
af7e33526b5a804d68c24a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.685944 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 21:26:48.17449602 +0000 UTC Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.699685 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.714406 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.726682 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.752227 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.764362 4656 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.764388 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.764397 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.764411 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.764422 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.785526 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.818793 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.841887 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.891768 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.891807 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.891818 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.891846 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.891858 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.994461 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.994499 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.994515 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.994535 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:22 crc kubenswrapper[4656]: I0128 15:19:22.994549 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:22Z","lastTransitionTime":"2026-01-28T15:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.098032 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.098062 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.098071 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.098085 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.098095 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:23Z","lastTransitionTime":"2026-01-28T15:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.201154 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.201194 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.201212 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.201245 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.201257 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:23Z","lastTransitionTime":"2026-01-28T15:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.371484 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.371526 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.371539 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.371558 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.371572 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:23Z","lastTransitionTime":"2026-01-28T15:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.686532 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:09:40.109157128 +0000 UTC Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.734452 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.734496 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.734505 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.734521 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.734533 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:23Z","lastTransitionTime":"2026-01-28T15:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.838521 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.838567 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.838582 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.838604 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.838618 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:23Z","lastTransitionTime":"2026-01-28T15:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
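
The two certificate_manager records (15:19:23.686532 above and 15:19:24.687032 below) report the same kubelet-serving expiration, 2026-02-24 05:53:03 UTC, but different rotation deadlines (2025-12-19 22:09:40 and 2025-11-25 12:06:45), both already in the past on this clock, so rotation is due immediately and a new deadline appears to be re-drawn and logged on each pass. On my reading of client-go's certificate manager (an assumption, not stated in the log), the deadline is a jittered point in roughly the 70-90% window of the certificate's validity. A sketch with a hypothetical notBefore one year before the logged notAfter; both logged deadlines fall inside the resulting window:

    import random
    from datetime import datetime, timezone

    # notAfter is taken from the log; notBefore is an assumption (one-year cert).
    not_before = datetime(2025, 2, 24, 5, 53, 3, tzinfo=timezone.utc)
    not_after  = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)

    lifetime = not_after - not_before
    # Re-drawn each pass: a uniform point in the 70-90% slice of the lifetime.
    deadline = not_before + lifetime * (0.7 + 0.2 * random.random())
    print("rotation deadline:", deadline)
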
Has your network provider started?"} Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.964265 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.964502 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.964762 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.964966 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:23 crc kubenswrapper[4656]: I0128 15:19:23.965049 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:23Z","lastTransitionTime":"2026-01-28T15:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.068213 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.068253 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.068263 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.068282 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.068294 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.170736 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:24 crc kubenswrapper[4656]: E0128 15:19:24.171069 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.171439 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.171544 4656 util.go:30] "No sandbox for pod can be found. 
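
The "No sandbox for pod can be found" / "Error syncing pod, skipping" pairs here show the other half of the outage: pods whose sandboxes were lost cannot be restarted until a CNI config appears, so the sync loop skips them on every pass. To see which pods are stuck, a rough tally over a saved excerpt of this journal (the filename kubelet.journal is hypothetical, e.g. from journalctl -u kubelet > kubelet.journal):

    import re
    from collections import Counter

    # Match the kubelet's skip records and capture the namespace/name field.
    pat = re.compile(r'"Error syncing pod, skipping".*?pod="([^"]+)"')

    counts = Counter()
    with open("kubelet.journal", encoding="utf-8") as fh:
        for line in fh:
            m = pat.search(line)
            if m:
                counts[m.group(1)] += 1

    for pod, n in counts.most_common():
        print(f"{n:4d}  {pod}")

In the records above that yields network-check-target-xd92c, networking-console-plugin-85b44fc459-gdk6g, and network-check-source-55646444c4-trplf, each skipped until the network provider starts.
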
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:24 crc kubenswrapper[4656]: E0128 15:19:24.171723 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:24 crc kubenswrapper[4656]: E0128 15:19:24.171881 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.173365 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.173417 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.173429 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.173445 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.173488 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.293324 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.293363 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.293371 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.293387 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.293400 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.395378 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.395416 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.395429 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.395447 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.395480 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.497592 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.497622 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.497630 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.497644 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.497653 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.600437 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.600494 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.600509 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.600532 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.600545 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.687032 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:06:45.889028746 +0000 UTC Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.703883 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.703926 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.703957 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.703977 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.703989 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.793488 4656 generic.go:334] "Generic (PLEG): container finished" podID="8f9a9023-4c07-4c93-b4d6-9034873ace37" containerID="23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516" exitCode=0 Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.793545 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerDied","Data":"23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.806252 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.806534 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.806612 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.806690 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.806759 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
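
Two notes on the records around here. First, the failures are status-reporting only: the PLEG records just above show the whereabouts-cni-bincopy init container (cri-o://23352506...) completing with exit code 0 regardless, so the multus init chain keeps progressing. Second, a reading aid for the "failed to patch status" payloads (the next record, for network-node-identity-vrzqb, is typical): each is a strategic-merge patch whose JSON arrives escaped twice by the surrounding quoting layers. To inspect one, copy the text between the outer \" and \" into a file and undo two escape rounds before parsing. A sketch (patch_blob.txt is hypothetical):

    import json

    def unescape_round(s: str) -> str:
        # Undo one layer of backslash escaping by parsing s as the body of
        # a JSON string literal.
        return json.loads(f'"{s}"')

    # patch_blob.txt (hypothetical) holds the text copied from between the
    # outer \" ... \" of one "failed to patch status" record.
    blob = open("patch_blob.txt", encoding="utf-8").read().strip()

    patch = json.loads(unescape_round(unescape_round(blob)))
    print(json.dumps(patch, indent=2)[:600])

The $setElementOrder/conditions key seen in every payload is strategic-merge-patch syntax: it pins the ordering of the conditions list entries the patch touches.
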
Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.819249 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.838828 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.851662 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.869454 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.918451 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.918504 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.918515 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.918545 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.918555 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:24Z","lastTransitionTime":"2026-01-28T15:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.925800 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622d
af7e33526b5a804d68c24a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.946551 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.963485 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:24 crc kubenswrapper[4656]: I0128 15:19:24.975791 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:24.991615 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.006911 4656 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.026134 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233
52506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.045655 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.069564 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.169684 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.169721 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.169732 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.169771 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.169783 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:25Z","lastTransitionTime":"2026-01-28T15:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.272231 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.272290 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.272305 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.272324 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.272356 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:25Z","lastTransitionTime":"2026-01-28T15:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.376204 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.376265 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.376281 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.376305 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.376321 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:25Z","lastTransitionTime":"2026-01-28T15:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.479374 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.479407 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.479418 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.479435 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.479446 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:25Z","lastTransitionTime":"2026-01-28T15:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.581909 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.581947 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.581958 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.581974 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.581984 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:25Z","lastTransitionTime":"2026-01-28T15:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.684869 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.684911 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.684921 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.684938 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.684949 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:25Z","lastTransitionTime":"2026-01-28T15:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.687635 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 21:09:43.419383191 +0000 UTC Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.788409 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.788456 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.788467 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.788488 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.788499 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:25Z","lastTransitionTime":"2026-01-28T15:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.805271 4656 generic.go:334] "Generic (PLEG): container finished" podID="8f9a9023-4c07-4c93-b4d6-9034873ace37" containerID="34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6" exitCode=0 Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.805357 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerDied","Data":"34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.850472 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.867987 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q"] Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.868918 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.871575 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.873742 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7112154f-4499-48ec-9135-6f4a26eca33a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.873802 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7112154f-4499-48ec-9135-6f4a26eca33a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.873856 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7112154f-4499-48ec-9135-6f4a26eca33a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.873896 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mq85\" (UniqueName: \"kubernetes.io/projected/7112154f-4499-48ec-9135-6f4a26eca33a-kube-api-access-2mq85\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.874933 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.877609 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.902796 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.903312 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.903449 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.903550 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.903741 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:25Z","lastTransitionTime":"2026-01-28T15:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.904690 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.918638 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.938987 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.955914 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.970560 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.974986 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7112154f-4499-48ec-9135-6f4a26eca33a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.975151 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7112154f-4499-48ec-9135-6f4a26eca33a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.975275 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mq85\" (UniqueName: \"kubernetes.io/projected/7112154f-4499-48ec-9135-6f4a26eca33a-kube-api-access-2mq85\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.975392 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7112154f-4499-48ec-9135-6f4a26eca33a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.976068 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7112154f-4499-48ec-9135-6f4a26eca33a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.976618 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7112154f-4499-48ec-9135-6f4a26eca33a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:25 crc kubenswrapper[4656]: I0128 15:19:25.985191 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7112154f-4499-48ec-9135-6f4a26eca33a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.004700 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.006356 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.006407 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.006421 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.006441 4656 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.006453 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.009344 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mq85\" (UniqueName: \"kubernetes.io/projected/7112154f-4499-48ec-9135-6f4a26eca33a-kube-api-access-2mq85\") pod \"ovnkube-control-plane-749d76644c-b6g2q\" (UID: \"7112154f-4499-48ec-9135-6f4a26eca33a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.022060 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.038465 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.054408 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.069019 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.084185 4656 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.101304 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.118563 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.118608 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.118617 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.118642 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.118652 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.127940 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.139270 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.152599 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.167453 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.169698 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.169876 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.169990 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.170057 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.170095 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.170135 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.184846 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.186988 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.210299 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: W0128 15:19:26.210282 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7112154f_4499_48ec_9135_6f4a26eca33a.slice/crio-61dafdc870de9a07c8341c1b0a9e5097abc71526c434f6f7e08d951f55f35521 WatchSource:0}: Error finding container 61dafdc870de9a07c8341c1b0a9e5097abc71526c434f6f7e08d951f55f35521: Status 404 returned error can't find the container with id 61dafdc870de9a07c8341c1b0a9e5097abc71526c434f6f7e08d951f55f35521 Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.222314 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.222374 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.222383 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.222400 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.222624 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.225202 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.242615 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.259935 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.273778 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.290296 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.315848 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c4
3cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.326068 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.326114 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.326126 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.326140 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.326151 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.328644 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.428723 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.428772 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.428783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.428799 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.428808 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.530869 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.530916 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.530929 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.530946 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.530957 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.635267 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.635331 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.635354 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.635391 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.635420 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
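The NodeNotReady records above repeat roughly every 100ms and will keep repeating until a CNI configuration file appears in the directory the kubelet names, /etc/kubernetes/cni/net.d/ (populated here by multus/ovn-kubernetes once they come up). A minimal polling sketch for that condition; the directory path is from the log, while the .conf/.conflist/.json suffixes are the conventional CNI config extensions and an assumption on my part:

```python
# Sketch: wait for a CNI config to appear in the directory named in the
# "NetworkPluginNotReady" message above.
import pathlib
import time

CNI_DIR = pathlib.Path("/etc/kubernetes/cni/net.d")

def cni_configs():
    # Conventional CNI config suffixes (assumption, not from the log).
    if not CNI_DIR.is_dir():
        return []
    return [p for p in CNI_DIR.iterdir() if p.suffix in (".conf", ".conflist", ".json")]

while not cni_configs():
    print(f"no CNI configuration file in {CNI_DIR} yet; retrying...")
    time.sleep(5)
print("CNI config present:", [p.name for p in cni_configs()])
```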
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.833284 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:32:15.049657887 +0000 UTC
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.833697 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.833816 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.833847 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.833876 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.833899 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834066 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:19:42.834023116 +0000 UTC m=+73.342193920 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
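The TearDown failure above means the kubevirt.io.hostpath-provisioner CSI driver has not re-registered with the kubelet since the restart, so the unmount is parked and retried on a backoff (durationBeforeRetry 16s). CSI drivers register by dropping a socket into the kubelet's plugin registration directory; a sketch that lists it so a missing driver is visible. The /var/lib/kubelet/plugins_registry path is the kubelet default and an assumption here, since the log only names the driver:

```python
# Sketch: list CSI plugin registration sockets so the missing
# kubevirt.io.hostpath-provisioner entry stands out. Directory is the
# kubelet default (assumption; not stated in the log).
import pathlib

REGISTRY = pathlib.Path("/var/lib/kubelet/plugins_registry")
wanted = "kubevirt.io.hostpath-provisioner"

sockets = sorted(p.name for p in REGISTRY.glob("*.sock")) if REGISTRY.is_dir() else []
print("registered plugin sockets:", sockets or "none")
print(f"{wanted} registered:", any(wanted in s for s in sockets))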
Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834105 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834122 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834135 4656 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834152 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834190 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834204 4656 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834207 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:42.834194171 +0000 UTC m=+73.342364975 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834235 4656 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834243 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:42.834230942 +0000 UTC m=+73.342401956 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
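The kubenswrapper payloads above all follow klog's header layout (Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg), which makes runs like this straightforward to triage mechanically. A small parser sketch under that assumption; the regex is written for the records above, and real journal lines additionally carry the "Jan 28 15:19:26 crc kubenswrapper[4656]:" syslog-style prefix that would need to be stripped first:

```python
# Sketch: split a klog-style header into fields for triage.
import re

KLOG = re.compile(
    r"(?P<level>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +"
    r"(?P<tid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)"
)

# Sample taken verbatim from the records above.
sample = ('E0128 15:19:26.834105 4656 projected.go:288] Couldn\'t get configMap '
          'openshift-network-diagnostics/kube-root-ca.crt: object '
          '"openshift-network-diagnostics"/"kube-root-ca.crt" not registered')
m = KLOG.match(sample)
if m:
    print(m.group("level"), m.group("src"), "->", m.group("msg")[:60])
```

The "not registered" errors it would surface here are transient: the kubelet has restarted and its informer caches have not yet synced the kube-root-ca.crt and openshift-service-ca.crt configmaps, so projected-volume setup is deferred and retried (No retries permitted until ... 16s).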
Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834284 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:42.834276234 +0000 UTC m=+73.342447038 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834309 4656 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 15:19:26 crc kubenswrapper[4656]: E0128 15:19:26.834348 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:42.834337015 +0000 UTC m=+73.342508029 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.836372 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.836405 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.836415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.836428 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.836441 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.856523 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" event={"ID":"8f9a9023-4c07-4c93-b4d6-9034873ace37","Type":"ContainerStarted","Data":"8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.866847 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" event={"ID":"7112154f-4499-48ec-9135-6f4a26eca33a","Type":"ContainerStarted","Data":"c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.866919 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" event={"ID":"7112154f-4499-48ec-9135-6f4a26eca33a","Type":"ContainerStarted","Data":"544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.866931 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" event={"ID":"7112154f-4499-48ec-9135-6f4a26eca33a","Type":"ContainerStarted","Data":"61dafdc870de9a07c8341c1b0a9e5097abc71526c434f6f7e08d951f55f35521"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.880684 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.904023 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.919513 4656 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.938589 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.939250 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.939286 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.939311 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.939330 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.939340 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:26Z","lastTransitionTime":"2026-01-28T15:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.964654 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622d
af7e33526b5a804d68c24a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:26 crc kubenswrapper[4656]: I0128 15:19:26.984046 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.018710 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.042638 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.042969 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.043084 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.043225 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.043301 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:27Z","lastTransitionTime":"2026-01-28T15:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.056427 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.073065 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.091787 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.165975 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.257803 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.257864 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.257891 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.257930 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.258154 4656 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:27Z","lastTransitionTime":"2026-01-28T15:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.351919 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/
etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.364757 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.364790 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.364805 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.364825 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.364846 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:27Z","lastTransitionTime":"2026-01-28T15:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.369732 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bmj6r"] Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.370526 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:27 crc kubenswrapper[4656]: E0128 15:19:27.370615 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.378653 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.395797 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.420295 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.437573 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 
2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.453461 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.468301 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.468351 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.468369 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.468411 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.468433 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:27Z","lastTransitionTime":"2026-01-28T15:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.476155 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.500470 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.525330 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.548894 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.572598 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.572676 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.572707 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.572738 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.572751 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:27Z","lastTransitionTime":"2026-01-28T15:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.573823 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.573866 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhrdd\" (UniqueName: \"kubernetes.io/projected/11320542-8463-40db-8981-632be2bd5a48-kube-api-access-rhrdd\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.577213 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath
\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.610051 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622d
af7e33526b5a804d68c24a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.625351 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 
15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.695873 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.723386 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.820639 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.843051 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.859703 4656 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:27Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.972647 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 01:02:21.89988513 +0000 UTC Jan 28 15:19:27 crc kubenswrapper[4656]: E0128 15:19:27.974047 4656 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:27 crc kubenswrapper[4656]: E0128 15:19:27.974144 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs podName:11320542-8463-40db-8981-632be2bd5a48 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:28.47412334 +0000 UTC m=+58.982294144 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs") pod "network-metrics-daemon-bmj6r" (UID: "11320542-8463-40db-8981-632be2bd5a48") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.974647 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.974711 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhrdd\" (UniqueName: \"kubernetes.io/projected/11320542-8463-40db-8981-632be2bd5a48-kube-api-access-rhrdd\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.977885 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.977923 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.977933 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.977953 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:27 crc kubenswrapper[4656]: I0128 15:19:27.978001 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:27Z","lastTransitionTime":"2026-01-28T15:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.005270 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhrdd\" (UniqueName: \"kubernetes.io/projected/11320542-8463-40db-8981-632be2bd5a48-kube-api-access-rhrdd\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.082377 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.082701 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.082817 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.082926 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.083040 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:28Z","lastTransitionTime":"2026-01-28T15:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.170026 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.170139 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:28 crc kubenswrapper[4656]: E0128 15:19:28.170280 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.170719 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:28 crc kubenswrapper[4656]: E0128 15:19:28.170798 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:28 crc kubenswrapper[4656]: E0128 15:19:28.170874 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.223307 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.223351 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.223361 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.223382 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.223394 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:28Z","lastTransitionTime":"2026-01-28T15:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.409804 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.409843 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.409857 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.409874 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.409883 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:28Z","lastTransitionTime":"2026-01-28T15:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.504754 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:28 crc kubenswrapper[4656]: E0128 15:19:28.504963 4656 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:28 crc kubenswrapper[4656]: E0128 15:19:28.505027 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs podName:11320542-8463-40db-8981-632be2bd5a48 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:29.505009032 +0000 UTC m=+60.013179836 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs") pod "network-metrics-daemon-bmj6r" (UID: "11320542-8463-40db-8981-632be2bd5a48") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.513223 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.513271 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.513285 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.513311 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.513326 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:28Z","lastTransitionTime":"2026-01-28T15:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.615676 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.616192 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.616261 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.616324 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.616382 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:28Z","lastTransitionTime":"2026-01-28T15:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.720626 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.720697 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.720713 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.720738 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.720755 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:28Z","lastTransitionTime":"2026-01-28T15:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.861175 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.861214 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.861224 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.861242 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.861262 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:28Z","lastTransitionTime":"2026-01-28T15:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.868012 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.883534 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.899133 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.919534 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.944712 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.965008 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.965051 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.965063 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.965083 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.965097 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:28Z","lastTransitionTime":"2026-01-28T15:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.973811 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:01:38.659119029 +0000 UTC Jan 28 15:19:28 crc kubenswrapper[4656]: I0128 15:19:28.978352 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.003833 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.026430 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.046513 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.067100 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.069618 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.069750 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.070057 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.070369 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.070577 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.088301 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.104863 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.135493 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.168842 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c4
3cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.170843 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:29 crc kubenswrapper[4656]: E0128 15:19:29.171101 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.174126 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.174184 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.174201 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.174216 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.174256 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.187686 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 
15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.205607 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.260760 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.277916 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.277972 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.277993 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.278017 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.278031 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.381125 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.381499 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.381568 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.381637 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.381697 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.484726 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.484767 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.484776 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.484792 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.484802 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.516076 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:29 crc kubenswrapper[4656]: E0128 15:19:29.516546 4656 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:29 crc kubenswrapper[4656]: E0128 15:19:29.516875 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs podName:11320542-8463-40db-8981-632be2bd5a48 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:31.516828967 +0000 UTC m=+62.024999771 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs") pod "network-metrics-daemon-bmj6r" (UID: "11320542-8463-40db-8981-632be2bd5a48") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.588094 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.588749 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.589001 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.589119 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.589236 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.692042 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.693123 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.693218 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.693293 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.693418 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.796949 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.797453 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.797617 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.797704 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.797786 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.899987 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.900028 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.900039 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.900058 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.900070 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:29Z","lastTransitionTime":"2026-01-28T15:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.974293 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:57:14.573005916 +0000 UTC Jan 28 15:19:29 crc kubenswrapper[4656]: I0128 15:19:29.997625 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/0.log" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.002051 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.002202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.002268 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.002375 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.002477 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.002093 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.002093 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82" exitCode=1 Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.003427 4656 scope.go:117] "RemoveContainer" containerID="ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.089086 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.106233 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.106275 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.106286 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.106302 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.106316 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.108586 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.125185 4656 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.138395 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.153260 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.170499 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.170499 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.170873 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.170999 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.171190 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.171247 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.179864 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:29Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:19:29.141399 5844 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:29.141521 5844 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:29.141539 5844 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:29.141554 5844 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:29.141562 5844 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:29.141589 5844 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:29.141597 5844 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:29.141605 5844 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:29.141627 5844 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:29.141655 5844 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:19:29.141673 5844 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:19:29.141699 5844 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:29.141704 5844 factory.go:656] Stopping watch factory\\\\nI0128 15:19:29.141722 5844 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:19:29.141749 5844 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.194368 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.213443 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.213524 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.213540 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.213561 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.213580 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.217330 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.233422 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.245873 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.258832 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.274559 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.289539 4656 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.304838 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.317108 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.317138 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.317191 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.317217 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.317229 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.317994 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.333067 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.420574 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.420631 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.420645 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.420667 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.420681 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.526281 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.526426 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.526440 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.526490 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.526502 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.629874 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.629907 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.629915 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.629930 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.629940 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.658354 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.658412 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.658425 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.658446 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.658460 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.675751 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.681408 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.681473 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.681486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.681507 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.681521 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.696716 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.702198 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.702240 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.702250 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.702296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.702310 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.716944 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.722145 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.722216 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.722239 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.722264 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.722278 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.791831 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.796810 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.796851 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.796864 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.796882 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.796894 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.817757 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:30 crc kubenswrapper[4656]: E0128 15:19:30.817940 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.819696 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.819725 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.819736 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.819754 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.819766 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.922233 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.922264 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.922272 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.922286 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.922295 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:30Z","lastTransitionTime":"2026-01-28T15:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:30 crc kubenswrapper[4656]: I0128 15:19:30.975019 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:45:06.046163316 +0000 UTC Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.009720 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/0.log" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.012581 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806"} Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.013196 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.170076 4656 util.go:30] "No sandbox for pod can be found. 
Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.170076 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:19:31 crc kubenswrapper[4656]: E0128 15:19:31.172261 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.181296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.181331 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.181347 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.181366 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.181492 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:31Z","lastTransitionTime":"2026-01-28T15:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.380317 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.391839 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.391879 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.391927 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.391946 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.391960 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:31Z","lastTransitionTime":"2026-01-28T15:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.407457 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.439446 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:29Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:19:29.141399 5844 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:29.141521 5844 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:29.141539 5844 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:29.141554 5844 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:29.141562 5844 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:29.141589 5844 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:29.141597 5844 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:29.141605 5844 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:29.141627 5844 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:29.141655 5844 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:19:29.141673 5844 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:19:29.141699 5844 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:29.141704 5844 factory.go:656] Stopping watch factory\\\\nI0128 
15:19:29.141722 5844 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:19:29.141749 5844 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.454684 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 
15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.472758 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.495974 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.496030 4656 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.496045 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.496065 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.496082 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:31Z","lastTransitionTime":"2026-01-28T15:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.506421 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.526884 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.543577 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.563921 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.582100 4656 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:31 crc kubenswrapper[4656]: E0128 15:19:31.582466 4656 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:31 crc kubenswrapper[4656]: E0128 15:19:31.582590 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs podName:11320542-8463-40db-8981-632be2bd5a48 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:35.582555759 +0000 UTC m=+66.090726563 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs") pod "network-metrics-daemon-bmj6r" (UID: "11320542-8463-40db-8981-632be2bd5a48") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.590136 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.598863 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.598901 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.598913 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.598928 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.598938 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:31Z","lastTransitionTime":"2026-01-28T15:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.608350 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.624587 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.646995 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.661412 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.767655 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 
15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.767705 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.767721 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.767752 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.767769 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:31Z","lastTransitionTime":"2026-01-28T15:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.773457 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.791414 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.811667 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.831870 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.848096 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.861430 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.877598 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.877677 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.877694 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.877732 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.877761 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:31Z","lastTransitionTime":"2026-01-28T15:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.882981 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.898540 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.917495 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.938558 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.957481 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.975803 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:17:41.628824194 +0000 UTC Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.976338 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.981449 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.981497 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.981511 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.981529 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.981543 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:31Z","lastTransitionTime":"2026-01-28T15:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:31 crc kubenswrapper[4656]: I0128 15:19:31.995102 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.014766 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.018322 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/1.log" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.019139 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/0.log" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.022310 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806" exitCode=1 Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.022350 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.022455 4656 scope.go:117] "RemoveContainer" containerID="ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.023442 4656 scope.go:117] "RemoveContainer" containerID="d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806" Jan 28 15:19:32 crc kubenswrapper[4656]: 
E0128 15:19:32.023646 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.045470 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.068493 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.085326 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.085615 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.085627 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.085644 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.085654 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.090674 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4
369e8e1efa437a24e03b3806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:29Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:19:29.141399 5844 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:29.141521 5844 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:29.141539 5844 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:29.141554 5844 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:29.141562 5844 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:29.141589 5844 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:29.141597 5844 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:29.141605 5844 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:29.141627 5844 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:29.141655 5844 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:19:29.141673 5844 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:19:29.141699 5844 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:29.141704 5844 factory.go:656] Stopping watch factory\\\\nI0128 15:19:29.141722 5844 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:19:29.141749 5844 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.104187 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 
15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.118065 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.132275 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.149048 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.161938 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.170075 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:32 crc kubenswrapper[4656]: E0128 15:19:32.170287 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.170284 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:32 crc kubenswrapper[4656]: E0128 15:19:32.170397 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.170320 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:32 crc kubenswrapper[4656]: E0128 15:19:32.170511 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.178729 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io
\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.187530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.187569 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.187582 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.187601 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.187615 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.206495 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca39659ed12ef871388392c8ef962ae538ac622daf7e33526b5a804d68c24a82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:29Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:19:29.141399 5844 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:29.141521 5844 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:29.141539 5844 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:29.141554 5844 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:29.141562 5844 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:29.141589 5844 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:29.141597 5844 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:29.141605 5844 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:29.141627 5844 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:29.141655 5844 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:19:29.141673 5844 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:19:29.141699 5844 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:29.141704 5844 factory.go:656] Stopping watch factory\\\\nI0128 15:19:29.141722 5844 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:19:29.141749 5844 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:31Z\\\",\\\"message\\\":\\\"1.Pod event handler 3\\\\nI0128 15:19:31.807746 6061 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.807791 6061 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.808220 6061 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.809713 6061 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:31.809803 6061 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:31.809884 6061 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:31.809898 6061 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:31.809913 6061 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:31.809939 6061 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:31.809943 6061 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:31.809961 6061 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:31.809963 6061 factory.go:656] Stopping watch factory\\\\nI0128 15:19:31.809963 6061 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:31.809989 6061 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.220863 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.238453 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.258367 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.275601 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.289809 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.290623 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.290668 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.290679 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.290715 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.290725 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.306304 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.321852 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.339885 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.358951 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.374216 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.392682 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.392721 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.392730 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.392747 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.392757 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.495605 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.495862 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.495936 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.496031 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.496134 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.598745 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.598879 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.598893 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.598913 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.598926 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.702283 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.702339 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.702355 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.702378 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.702393 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.804941 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.804969 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.804976 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.804990 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.805000 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.908521 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.908628 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.908674 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.908730 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.908759 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:32Z","lastTransitionTime":"2026-01-28T15:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:32 crc kubenswrapper[4656]: I0128 15:19:32.976482 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 01:47:14.236610142 +0000 UTC Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.012125 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.012206 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.012220 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.012241 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.012256 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.038221 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/1.log" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.044675 4656 scope.go:117] "RemoveContainer" containerID="d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806" Jan 28 15:19:33 crc kubenswrapper[4656]: E0128 15:19:33.044925 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.064570 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.090734 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.104057 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.114838 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.114879 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.114887 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.114909 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.114920 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.118266 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.141355 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:31Z\\\",\\\"message\\\":\\\"1.Pod event handler 3\\\\nI0128 15:19:31.807746 6061 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.807791 6061 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.808220 6061 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.809713 6061 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:31.809803 6061 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:31.809884 6061 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:31.809898 6061 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:31.809913 6061 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:31.809939 6061 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:31.809943 6061 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:31.809961 6061 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:31.809963 6061 factory.go:656] Stopping watch factory\\\\nI0128 
15:19:31.809963 6061 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:31.809989 6061 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.153244 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 
15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.168867 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.169781 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:33 crc kubenswrapper[4656]: E0128 15:19:33.169982 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.184417 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.198234 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.211482 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z"
Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.216882 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.216924 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.216942 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.216961 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.216975 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
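Editor's note: every "Failed to update status for pod" entry in this stretch fails the same way: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is months before the node's current clock (2026-01-28), so each status patch dies in the TLS handshake. The failing check is an ordinary x509 validity-window comparison. The self-contained Go sketch below reproduces the same error shape with a synthetic, already-expired self-signed certificate (the CommonName is borrowed from the webhook name purely for illustration; nothing here is the cluster's actual code):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Generate a throwaway key and a self-signed cert that expired a day ago.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "pod.network-node-identity.openshift.io"}, // illustrative CN
		NotBefore:    time.Now().Add(-48 * time.Hour),
		NotAfter:     time.Now().Add(-24 * time.Hour), // already past its window
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	// The same comparison the TLS handshake makes before the webhook POST succeeds.
	if now := time.Now(); now.After(cert.NotAfter) {
		fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

Note that the kubelet's own serving certificate is not the problem in this capture: the certificate_manager.go line earlier shows kubernetes.io/kubelet-serving valid until 2026-02-24; only the webhook's 2025-08-24 certificate is past its window.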
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.238906 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.254704 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.330939 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.332443 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.332503 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc 
kubenswrapper[4656]: I0128 15:19:33.332517 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.332537 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.332550 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.347538 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.357887 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.435970 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.436016 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.436029 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.436047 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.436073 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.538268 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.538591 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.538658 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.538736 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.538803 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.641948 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.642001 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.642011 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.642064 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.642082 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.745242 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.745299 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.745314 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.745335 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.745348 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.847886 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.848393 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.848488 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.848581 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.848695 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.950607 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.950911 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.951000 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.951102 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.951194 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:33Z","lastTransitionTime":"2026-01-28T15:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:33 crc kubenswrapper[4656]: I0128 15:19:33.976975 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:28:01.638590261 +0000 UTC Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.053871 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.054219 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.054348 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.054486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.054589 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.157510 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.157566 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.157577 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.157598 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.157614 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.170202 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:34 crc kubenswrapper[4656]: E0128 15:19:34.170352 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.170759 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:34 crc kubenswrapper[4656]: E0128 15:19:34.170816 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.170959 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:34 crc kubenswrapper[4656]: E0128 15:19:34.171177 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.259936 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.259986 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.259995 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.260009 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.260019 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.363368 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.363400 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.363411 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.363427 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.363463 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.470374 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.470756 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.470842 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.470913 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.470969 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.573666 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.573887 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.573983 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.574068 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.574129 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.677232 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.677596 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.677694 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.677833 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.677938 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.781214 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.781273 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.781285 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.781303 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.781315 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.883991 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.884035 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.884083 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.884105 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.884117 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.977846 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 00:45:23.047760213 +0000 UTC Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.987056 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.987104 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.987113 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.987131 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:34 crc kubenswrapper[4656]: I0128 15:19:34.987144 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:34Z","lastTransitionTime":"2026-01-28T15:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.090248 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.090281 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.090290 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.090304 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.090314 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.172286 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:35 crc kubenswrapper[4656]: E0128 15:19:35.172497 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.193647 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.193689 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.193701 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.193740 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.193764 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.297026 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.297055 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.297065 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.297080 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.297089 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.400250 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.400300 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.400312 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.400331 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.400343 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.502592 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.502623 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.502653 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.502674 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.502686 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.606593 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.606643 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.606664 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.606687 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.606704 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.658017 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:35 crc kubenswrapper[4656]: E0128 15:19:35.658216 4656 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:35 crc kubenswrapper[4656]: E0128 15:19:35.658320 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs podName:11320542-8463-40db-8981-632be2bd5a48 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:43.658294099 +0000 UTC m=+74.166464903 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs") pod "network-metrics-daemon-bmj6r" (UID: "11320542-8463-40db-8981-632be2bd5a48") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.709757 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.709847 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.709863 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.709885 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.709901 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.812712 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.812764 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.812776 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.812819 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.812837 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.916148 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.916226 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.916237 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.916261 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.916276 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:35Z","lastTransitionTime":"2026-01-28T15:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:35 crc kubenswrapper[4656]: I0128 15:19:35.978564 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 23:14:21.094545982 +0000 UTC Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.020510 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.020570 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.020585 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.020603 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.020615 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.123294 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.123364 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.123375 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.123415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.123426 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.169974 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:36 crc kubenswrapper[4656]: E0128 15:19:36.170136 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.170392 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:36 crc kubenswrapper[4656]: E0128 15:19:36.170452 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.170794 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:36 crc kubenswrapper[4656]: E0128 15:19:36.170844 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.226347 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.226377 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.226385 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.226400 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.226411 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.329121 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.329158 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.329181 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.329218 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.329228 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.432406 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.432439 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.432453 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.432474 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.432487 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.535956 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.536009 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.536021 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.536043 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.536082 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.638775 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.638814 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.638826 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.638844 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.638855 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.742103 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.742183 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.742201 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.742226 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.742240 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.845207 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.845260 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.845273 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.845294 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.845304 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.948349 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.948406 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.948416 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.948435 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.948446 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:36Z","lastTransitionTime":"2026-01-28T15:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:36 crc kubenswrapper[4656]: I0128 15:19:36.979037 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:35:58.090592785 +0000 UTC Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.051494 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.051626 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.051643 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.051662 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.051675 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.154410 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.154444 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.154455 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.154473 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.154495 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.170184 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:37 crc kubenswrapper[4656]: E0128 15:19:37.170375 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.257993 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.258043 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.258056 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.258073 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.258089 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.360824 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.360858 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.360868 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.360882 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.360892 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.463450 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.463530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.463543 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.463560 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.463572 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.566103 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.566146 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.566154 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.566191 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.566201 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.668826 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.668859 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.668871 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.668891 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.668905 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.772013 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.772090 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.772106 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.772126 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.772141 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.874602 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.874651 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.874665 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.874689 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.874701 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.978190 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.978230 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.978241 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.978263 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.978283 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:37Z","lastTransitionTime":"2026-01-28T15:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:37 crc kubenswrapper[4656]: I0128 15:19:37.979736 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:24:20.300539262 +0000 UTC Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.080977 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.081020 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.081033 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.081053 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.081067 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.170285 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.170347 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.170400 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:38 crc kubenswrapper[4656]: E0128 15:19:38.170435 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:38 crc kubenswrapper[4656]: E0128 15:19:38.170516 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:38 crc kubenswrapper[4656]: E0128 15:19:38.170611 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.184279 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.184334 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.184346 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.184370 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.184384 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.287068 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.287106 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.287117 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.287146 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.287173 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.389974 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.390028 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.390041 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.390061 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.390073 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.493117 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.493188 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.493202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.493222 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.493235 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.595850 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.595897 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.595910 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.595933 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.595948 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.698055 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.698100 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.698112 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.698134 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.698146 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.726579 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.745224 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.759846 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.774002 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.788765 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.800426 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.802303 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.802391 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.802404 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.802432 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.802468 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.815527 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.836135 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:31Z\\\",\\\"message\\\":\\\"1.Pod event handler 3\\\\nI0128 15:19:31.807746 6061 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.807791 6061 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.808220 6061 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.809713 6061 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:31.809803 6061 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:31.809884 6061 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:31.809898 6061 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:31.809913 6061 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:31.809939 6061 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:31.809943 6061 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:31.809961 6061 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:31.809963 6061 factory.go:656] Stopping watch factory\\\\nI0128 
15:19:31.809963 6061 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:31.809989 6061 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.855152 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 
15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.873633 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.889883 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.904565 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.904597 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.904605 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.904621 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.904631 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:38Z","lastTransitionTime":"2026-01-28T15:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.907140 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.920563 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.934197 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.948276 4656 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.968643 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.980532 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:25:59.275136153 +0000 UTC Jan 28 15:19:38 crc kubenswrapper[4656]: I0128 15:19:38.984788 4656 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.007359 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.007395 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.007406 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.007432 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.007446 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.110232 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.110283 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.110296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.110317 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.110329 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.169934 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:39 crc kubenswrapper[4656]: E0128 15:19:39.170134 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.213435 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.213482 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.213493 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.213522 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.213534 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.316519 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.316561 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.316571 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.316589 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.316605 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.420282 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.420322 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.420332 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.420352 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.420363 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.523441 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.523470 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.523480 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.523495 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.523504 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.626222 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.626279 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.626289 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.626308 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.626318 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.729074 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.729106 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.729115 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.729131 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.729141 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.832252 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.832301 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.832312 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.832332 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.832345 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.935021 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.935077 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.935128 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.935150 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.935190 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:39Z","lastTransitionTime":"2026-01-28T15:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:39 crc kubenswrapper[4656]: I0128 15:19:39.981523 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:10:20.309217243 +0000 UTC Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.040068 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.040116 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.040127 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.040147 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.040179 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.143394 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.143440 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.143451 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.143471 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.143483 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.170587 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.170656 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.170734 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:40 crc kubenswrapper[4656]: E0128 15:19:40.170774 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:40 crc kubenswrapper[4656]: E0128 15:19:40.170948 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:40 crc kubenswrapper[4656]: E0128 15:19:40.171035 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.245733 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.245836 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.245857 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.245877 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.245888 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.349175 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.349228 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.349241 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.349267 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.349280 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.451705 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.451741 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.451752 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.451770 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.451780 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.553656 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.553689 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.553699 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.553715 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.553726 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.656205 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.656242 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.656253 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.656274 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.656288 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.758780 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.758821 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.758832 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.758851 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.758862 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.861806 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.861849 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.861861 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.861888 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.861901 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.965074 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.965118 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.965128 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.965146 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.965180 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:40Z","lastTransitionTime":"2026-01-28T15:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:40 crc kubenswrapper[4656]: I0128 15:19:40.982475 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:48:30.293803464 +0000 UTC Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.002253 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.002294 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.002306 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.002324 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.002338 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: E0128 15:19:41.016720 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.022354 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.022404 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.022417 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.022440 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.022452 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: E0128 15:19:41.040995 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.062028 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.062087 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.062100 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.062122 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.062136 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: E0128 15:19:41.079589 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.084612 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.084650 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.084662 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.084682 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.084694 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: E0128 15:19:41.101021 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.105024 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.105064 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.105082 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.105105 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.105119 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: E0128 15:19:41.119347 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: E0128 15:19:41.119514 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.121412 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.121448 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.121459 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.121474 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.121484 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.169929 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:41 crc kubenswrapper[4656]: E0128 15:19:41.170097 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.205445 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.218885 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.223609 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.223636 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.223646 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.223660 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.223670 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.244487 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:31Z\\\",\\\"message\\\":\\\"1.Pod event handler 3\\\\nI0128 15:19:31.807746 6061 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.807791 6061 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.808220 6061 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.809713 6061 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:31.809803 6061 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:31.809884 6061 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:31.809898 6061 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:31.809913 6061 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:31.809939 6061 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:31.809943 6061 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:31.809961 6061 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:31.809963 6061 factory.go:656] Stopping watch factory\\\\nI0128 15:19:31.809963 6061 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:31.809989 6061 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.258866 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.274070 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.291881 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.306972 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.323915 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.328287 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.328323 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.328335 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.328352 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.328365 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.338566 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.351115 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.367974 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.380330 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.395607 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.411961 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.430734 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.430763 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.430772 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.430787 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.430797 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.431727 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d
802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.446035 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:41Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.532968 4656 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.533284 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.533300 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.533317 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.533328 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.635398 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.635426 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.635437 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.635453 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.635462 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.738282 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.738324 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.738337 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.738356 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.738367 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.840244 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.840515 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.840613 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.840710 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.840797 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.943033 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.943095 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.943112 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.943132 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.943143 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:41Z","lastTransitionTime":"2026-01-28T15:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:41 crc kubenswrapper[4656]: I0128 15:19:41.982607 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:31:25.129144662 +0000 UTC Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.045822 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.045868 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.045878 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.045895 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.045909 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.148960 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.149019 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.149036 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.149059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.149071 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.170402 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.170630 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.171065 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.172190 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.172478 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.172883 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.251473 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.251812 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.251907 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.252004 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.252092 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.355575 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.355879 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.356030 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.356122 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.356232 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.458991 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.459298 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.459385 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.459519 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.459602 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.563023 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.563067 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.563078 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.563101 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.563113 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.665797 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.665837 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.665849 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.665867 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.665879 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.770531 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.770580 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.770594 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.770637 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.770647 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.837651 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.837794 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.837835 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.837875 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.837977 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:20:14.837937675 +0000 UTC m=+105.346108499 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838058 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838106 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838106 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838127 4656 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838137 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838151 4656 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838194 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:20:14.838182832 +0000 UTC m=+105.346353806 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838215 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:20:14.838204753 +0000 UTC m=+105.346375737 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838153 4656 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838236 4656 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838254 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:20:14.838247334 +0000 UTC m=+105.346418138 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.838077 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:42 crc kubenswrapper[4656]: E0128 15:19:42.838300 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:20:14.838280985 +0000 UTC m=+105.346451969 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.873497 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.873541 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.873550 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.873568 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.873581 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.975967 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.975997 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.976035 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.976051 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.976060 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:42Z","lastTransitionTime":"2026-01-28T15:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
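Each failed mount/unmount above is parked by nestedpendingoperations with an exponentially growing delay; the 32s figure is the durationBeforeRetry the kubelet itself logged, and 15:19:42 + 32s gives exactly the 15:20:14 retry gate in the entries. A toy model of that doubling backoff (the 500 ms base and the cap are assumptions for illustration, not values taken from this log):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Assumed constants modeled on kubelet's volume-operation backoff;
# treat them as illustrative, not authoritative.
INITIAL = timedelta(milliseconds=500)
CAP = timedelta(minutes=2, seconds=2)

@dataclass
class RetryState:
    last_error_time: datetime
    duration: timedelta = INITIAL

    def fail(self, now: datetime) -> datetime:
        """Record a failure; return the earliest permitted retry time."""
        self.duration = min(self.duration * 2, CAP)
        self.last_error_time = now
        return now + self.duration

# Previous delay 16s doubles to 32s on this failure, gating the next
# attempt until 15:20:14 -- matching the log entries above.
state = RetryState(last_error_time=datetime(2026, 1, 28, 15, 19, 42),
                   duration=timedelta(seconds=16))
print(state.fail(datetime(2026, 1, 28, 15, 19, 42)))  # 2026-01-28 15:20:14
```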
Has your network provider started?"} Jan 28 15:19:42 crc kubenswrapper[4656]: I0128 15:19:42.983664 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:10:58.627013704 +0000 UTC Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.078899 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.078936 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.078948 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.078967 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.078981 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.170669 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:43 crc kubenswrapper[4656]: E0128 15:19:43.170860 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.181118 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.181190 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.181203 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.181222 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.181235 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.284725 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.284965 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.284982 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.285009 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.285024 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.388648 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.388692 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.388704 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.388725 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.388738 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.491712 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.491764 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.491779 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.491799 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.491812 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.595002 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.595054 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.595067 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.595088 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.595107 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.698733 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.698767 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.698780 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.698796 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.698808 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.750326 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:43 crc kubenswrapper[4656]: E0128 15:19:43.750534 4656 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:43 crc kubenswrapper[4656]: E0128 15:19:43.750626 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs podName:11320542-8463-40db-8981-632be2bd5a48 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:59.750602131 +0000 UTC m=+90.258772935 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs") pod "network-metrics-daemon-bmj6r" (UID: "11320542-8463-40db-8981-632be2bd5a48") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.802454 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.802511 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.802524 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.802545 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.802560 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.906348 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.906397 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.906409 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.906427 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.906443 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:43Z","lastTransitionTime":"2026-01-28T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:43 crc kubenswrapper[4656]: I0128 15:19:43.984782 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:16:26.101065302 +0000 UTC Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.009428 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.009499 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.009512 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.009530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.009548 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.113391 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.113740 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.113753 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.113772 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.113783 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.170229 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.170285 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:44 crc kubenswrapper[4656]: E0128 15:19:44.170384 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:44 crc kubenswrapper[4656]: E0128 15:19:44.170548 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.170597 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:44 crc kubenswrapper[4656]: E0128 15:19:44.170676 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.216042 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.216099 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.216108 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.216130 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.216144 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.318220 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.318259 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.318270 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.318287 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.318297 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.421212 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.421249 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.421257 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.421272 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.421283 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.523475 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.523508 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.523520 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.523538 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.523549 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.626682 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.626709 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.626718 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.626734 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.626743 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.729021 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.729065 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.729075 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.729090 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.729100 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.831800 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.831849 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.831860 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.831879 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.831891 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.934703 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.934756 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.934770 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.934791 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.934804 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:44Z","lastTransitionTime":"2026-01-28T15:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:44 crc kubenswrapper[4656]: I0128 15:19:44.985586 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:44:30.978594684 +0000 UTC Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.037913 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.037964 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.037973 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.037989 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.038000 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.140267 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.140311 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.140324 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.140342 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.140353 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.170120 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:45 crc kubenswrapper[4656]: E0128 15:19:45.170401 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.243088 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.243145 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.243155 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.243192 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.243203 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.346381 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.346415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.346423 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.346438 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.346448 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.449751 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.449798 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.449810 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.449828 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.449841 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.554551 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.554603 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.554616 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.554638 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.554919 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.658349 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.658662 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.658672 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.658689 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.658700 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.761747 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.761791 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.761802 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.761820 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.761834 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.865785 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.865827 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.865837 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.865853 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.865863 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.970694 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.970751 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.970762 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.970789 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.970802 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:45Z","lastTransitionTime":"2026-01-28T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:45 crc kubenswrapper[4656]: I0128 15:19:45.986415 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 23:29:09.2161276 +0000 UTC Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.074992 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.075068 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.075080 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.075100 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.075112 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.169648 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:46 crc kubenswrapper[4656]: E0128 15:19:46.169817 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.169835 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.169865 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:46 crc kubenswrapper[4656]: E0128 15:19:46.170329 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:46 crc kubenswrapper[4656]: E0128 15:19:46.170449 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.170575 4656 scope.go:117] "RemoveContainer" containerID="d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.177532 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.177556 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.177566 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.177581 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.177591 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.280859 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.280916 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.280927 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.280948 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.280959 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.438223 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.438252 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.438281 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.438314 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.438325 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.542342 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.542388 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.542400 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.542417 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.542427 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.645546 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.645587 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.645597 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.645616 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.645629 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.748700 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.748747 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.748762 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.748782 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.748793 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.851569 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.851629 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.851645 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.851690 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.851705 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.954059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.954112 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.954126 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.954143 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.954155 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:46Z","lastTransitionTime":"2026-01-28T15:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:46 crc kubenswrapper[4656]: I0128 15:19:46.987456 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 14:42:09.731628355 +0000 UTC Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.057144 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.057190 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.057202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.057219 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.057229 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:47Z","lastTransitionTime":"2026-01-28T15:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.101004 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/1.log" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.103755 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b"} Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.105189 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.143375 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\
\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.160115 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.160566 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.160594 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.160605 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.160624 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.160638 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:47Z","lastTransitionTime":"2026-01-28T15:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.172576 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:47 crc kubenswrapper[4656]: E0128 15:19:47.172760 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.178321 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.197677 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.219616 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.237780 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.252393 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.263716 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.263751 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.263763 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.263780 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.263790 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:47Z","lastTransitionTime":"2026-01-28T15:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.270696 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.306586 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:31Z\\\",\\\"message\\\":\\\"1.Pod event handler 3\\\\nI0128 15:19:31.807746 6061 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.807791 6061 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.808220 6061 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.809713 6061 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:31.809803 6061 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:31.809884 6061 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:31.809898 6061 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:31.809913 6061 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:31.809939 6061 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:31.809943 6061 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:31.809961 6061 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:31.809963 6061 factory.go:656] Stopping watch factory\\\\nI0128 
15:19:31.809963 6061 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:31.809989 6061 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.755118 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.755188 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.755202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.755224 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.755269 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:47Z","lastTransitionTime":"2026-01-28T15:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.797877 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.817677 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.833154 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.855842 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.859072 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.859217 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.859272 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.859347 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.859379 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:47Z","lastTransitionTime":"2026-01-28T15:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.875920 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.889999 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:47 crc kubenswrapper[4656]: I0128 15:19:47.911013 4656 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.078273 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:26:51.299213437 +0000 UTC Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.081378 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.081433 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.081452 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.081496 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.081507 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:48Z","lastTransitionTime":"2026-01-28T15:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.170108 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:48 crc kubenswrapper[4656]: E0128 15:19:48.170345 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.170601 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:48 crc kubenswrapper[4656]: E0128 15:19:48.170720 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.170894 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:48 crc kubenswrapper[4656]: E0128 15:19:48.170943 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.261129 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.261202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.261218 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.261239 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.261251 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:48Z","lastTransitionTime":"2026-01-28T15:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.366220 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.366296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.366307 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.366325 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.366627 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:48Z","lastTransitionTime":"2026-01-28T15:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.469846 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.469887 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.469904 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.469923 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.469937 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:48Z","lastTransitionTime":"2026-01-28T15:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.573478 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.573537 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.573548 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.573579 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.573593 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:48Z","lastTransitionTime":"2026-01-28T15:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.677130 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.677271 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.677287 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.677311 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.677334 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:48Z","lastTransitionTime":"2026-01-28T15:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.812502 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.812553 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.812566 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.812589 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.812604 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:48Z","lastTransitionTime":"2026-01-28T15:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.915657 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.915755 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.915775 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.915809 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:48 crc kubenswrapper[4656]: I0128 15:19:48.915827 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:48Z","lastTransitionTime":"2026-01-28T15:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.018499 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.018538 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.018551 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.018567 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.018578 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.078598 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:22:48.692741952 +0000 UTC Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.121608 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.121647 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.121655 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.121671 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.121680 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.170392 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:49 crc kubenswrapper[4656]: E0128 15:19:49.170608 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.224754 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.224788 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.224799 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.224822 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.224834 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.328225 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.328258 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.328267 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.328290 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.328301 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.430379 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.430416 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.430425 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.430439 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.430449 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.534118 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.534551 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.534564 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.534629 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.534647 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.638186 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.638239 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.638251 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.638271 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.638285 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.740740 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.740777 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.740787 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.740801 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.740811 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.843898 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.843955 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.843967 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.843987 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.843998 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.946246 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.946290 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.946321 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.946451 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:49 crc kubenswrapper[4656]: I0128 15:19:49.946467 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:49Z","lastTransitionTime":"2026-01-28T15:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.079465 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 01:32:22.224956047 +0000 UTC
Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.081230 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.081303 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.081312 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.081327 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.081355 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.122139 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/2.log" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.122961 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/1.log" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.126337 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b" exitCode=1 Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.126383 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.126769 4656 scope.go:117] "RemoveContainer" containerID="d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.128210 4656 scope.go:117] "RemoveContainer" containerID="62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b" Jan 28 15:19:50 crc kubenswrapper[4656]: E0128 15:19:50.128844 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.149715 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.164070 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.169984 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.170008 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.169984 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:50 crc kubenswrapper[4656]: E0128 15:19:50.170106 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:50 crc kubenswrapper[4656]: E0128 15:19:50.170200 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:50 crc kubenswrapper[4656]: E0128 15:19:50.170278 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.183924 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.183966 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.183974 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.183991 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.184018 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.186660 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5a
fe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:31Z\\\",\\\"message\\\":\\\"1.Pod event handler 3\\\\nI0128 15:19:31.807746 6061 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.807791 6061 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.808220 6061 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.809713 6061 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:31.809803 6061 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:31.809884 6061 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:31.809898 6061 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:31.809913 6061 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:31.809939 6061 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:31.809943 6061 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:31.809961 6061 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:31.809963 6061 factory.go:656] Stopping watch factory\\\\nI0128 15:19:31.809963 6061 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:31.809989 6061 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c02
9cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.199132 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 
15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.215965 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.232358 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.245105 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.261790 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.278068 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.287004 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.287047 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.287059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.287078 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.287091 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.296227 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.313078 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.334064 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.352189 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.358857 4656 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.369551 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.390512 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.390584 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.390605 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.390664 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.390682 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.397112 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\
\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.416502 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T15:19:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.493355 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.493415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.493442 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.493470 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.493484 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.597065 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.597113 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.597126 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.597145 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.597182 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.765282 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.765345 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.765364 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.765401 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.765426 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.869251 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.869286 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.869294 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.869308 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.869318 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.972651 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.972696 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.972706 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.972724 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:50 crc kubenswrapper[4656]: I0128 15:19:50.972741 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:50Z","lastTransitionTime":"2026-01-28T15:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.076802 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.076891 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.076904 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.076928 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.076945 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.080334 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:35:47.164441914 +0000 UTC Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.134044 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/2.log" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.163833 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.163869 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.163877 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.163895 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.163928 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.170639 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:51 crc kubenswrapper[4656]: E0128 15:19:51.170755 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.189942 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 28 15:19:51 crc kubenswrapper[4656]: E0128 15:19:51.191223 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.196608 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.196648 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.196660 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.196680 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.196693 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.199291 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5
fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: E0128 15:19:51.214112 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.215639 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.218970 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.219093 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.219109 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.219133 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.219145 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.236749 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: E0128 15:19:51.237572 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a
40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.243917 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.244019 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.244030 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.244051 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.244072 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.347680 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: E0128 15:19:51.356009 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.367194 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.367239 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.367250 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.367274 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.367286 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.374586 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: E0128 15:19:51.389443 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: E0128 15:19:51.389609 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.392941 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.392983 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.392996 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.393013 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.393026 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.402883 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.417771 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.435450 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.466393 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c4
3cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:31Z\\\",\\\"message\\\":\\\"1.Pod event handler 3\\\\nI0128 15:19:31.807746 6061 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.807791 6061 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.808220 6061 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.809713 6061 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:31.809803 6061 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:31.809884 6061 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:31.809898 6061 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:31.809913 6061 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:31.809939 6061 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:31.809943 6061 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:31.809961 6061 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:31.809963 6061 factory.go:656] Stopping watch factory\\\\nI0128 15:19:31.809963 6061 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:31.809989 6061 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.486787 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.495887 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.495924 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.495936 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.495953 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.495965 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.509278 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.529651 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.550373 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.568918 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.587259 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.599740 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.599808 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.599823 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.599851 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.599866 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.607781 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:19:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.704942 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.705087 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.705109 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.705139 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.705198 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.910911 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.910970 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.911011 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.911120 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:51 crc kubenswrapper[4656]: I0128 15:19:51.911140 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:51Z","lastTransitionTime":"2026-01-28T15:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.015826 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.015882 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.015895 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.015922 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.015940 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.081488 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:50:34.273287012 +0000 UTC Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.119738 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.119799 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.119816 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.119838 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.119853 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.170786 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.170931 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.170948 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:52 crc kubenswrapper[4656]: E0128 15:19:52.171109 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:52 crc kubenswrapper[4656]: E0128 15:19:52.171237 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:52 crc kubenswrapper[4656]: E0128 15:19:52.171359 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.223158 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.223236 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.223246 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.223587 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.223720 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.327513 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.327583 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.327597 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.327621 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.327634 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.431545 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.431604 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.431619 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.431644 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.431661 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.534207 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.534281 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.534296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.534320 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.534335 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.637714 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.637751 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.637766 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.637782 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.637793 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.741199 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.741244 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.741256 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.741276 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.741287 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.867530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.867584 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.867595 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.867615 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.867628 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.970754 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.970806 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.970838 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.970856 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:52 crc kubenswrapper[4656]: I0128 15:19:52.970875 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:52Z","lastTransitionTime":"2026-01-28T15:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.074593 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.074649 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.074660 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.074679 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.074691 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.082120 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 23:16:08.430420191 +0000 UTC Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.170495 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:53 crc kubenswrapper[4656]: E0128 15:19:53.170737 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.181455 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.181515 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.181528 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.181550 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.181564 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.285659 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.286254 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.286355 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.286444 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.286513 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.391221 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.391288 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.391300 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.391323 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.391337 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.494985 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.495027 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.495040 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.495062 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.495077 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.599324 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.599381 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.599393 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.599416 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.599430 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.702270 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.702320 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.702329 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.702346 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.702356 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.804743 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.805117 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.805234 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.805386 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.805596 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.908719 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.908774 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.908788 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.908807 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:53 crc kubenswrapper[4656]: I0128 15:19:53.908817 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:53Z","lastTransitionTime":"2026-01-28T15:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.011615 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.011678 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.011687 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.011710 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.011720 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.083803 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:03:09.586602176 +0000 UTC Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.115901 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.115951 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.115965 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.115989 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.116000 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.170275 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.170348 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.170318 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:54 crc kubenswrapper[4656]: E0128 15:19:54.170507 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:54 crc kubenswrapper[4656]: E0128 15:19:54.170662 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:54 crc kubenswrapper[4656]: E0128 15:19:54.170766 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.219234 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.219688 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.220279 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.220567 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.220664 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.323604 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.324077 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.324245 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.324429 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.324591 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.427805 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.427852 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.427865 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.427885 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.427898 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.530760 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.530801 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.530846 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.530863 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.530877 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.633199 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.633233 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.633243 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.633259 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.633271 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.736054 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.736095 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.736104 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.736120 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.736134 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.838304 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.838339 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.838349 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.838367 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.838379 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.941072 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.941115 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.941124 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.941141 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:54 crc kubenswrapper[4656]: I0128 15:19:54.941181 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:54Z","lastTransitionTime":"2026-01-28T15:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.043678 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.043716 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.043729 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.043751 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.043770 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.084707 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 21:23:53.435213973 +0000 UTC Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.146969 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.147010 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.147021 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.147039 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.147052 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.169702 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:55 crc kubenswrapper[4656]: E0128 15:19:55.169943 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.250749 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.250869 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.250884 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.250917 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.250930 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.355862 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.355946 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.355970 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.355993 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.356028 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.460068 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.460753 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.460769 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.460789 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.460802 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.564454 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.564534 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.564548 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.564572 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.564588 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.667779 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.667838 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.667886 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.667907 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.667920 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.770705 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.770752 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.770765 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.770788 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.770801 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.874027 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.874110 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.874130 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.874189 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.874209 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.982492 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.983853 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.983870 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.983894 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:55 crc kubenswrapper[4656]: I0128 15:19:55.983914 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:55Z","lastTransitionTime":"2026-01-28T15:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.084833 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:35:39.822284073 +0000 UTC Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.088865 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.088950 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.088963 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.088983 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.088994 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.170365 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.170451 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.170501 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:56 crc kubenswrapper[4656]: E0128 15:19:56.170609 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:56 crc kubenswrapper[4656]: E0128 15:19:56.170926 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:56 crc kubenswrapper[4656]: E0128 15:19:56.171094 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.193245 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.193311 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.193323 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.193347 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.193362 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.327993 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.328044 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.328056 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.328076 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.328088 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.431850 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.431960 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.431977 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.432004 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.432035 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.535919 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.536025 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.536040 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.536063 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.536082 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.639506 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.639554 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.639564 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.639583 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.639595 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.743798 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.743868 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.743883 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.743908 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.743926 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.848210 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.848277 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.848292 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.848313 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.848330 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.951657 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.952142 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.952277 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.952391 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:56 crc kubenswrapper[4656]: I0128 15:19:56.952497 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:56Z","lastTransitionTime":"2026-01-28T15:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.056477 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.056540 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.056555 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.056577 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.056589 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.085039 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:35:14.068399595 +0000 UTC Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.159801 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.159850 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.159865 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.159886 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.159898 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.170330 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:57 crc kubenswrapper[4656]: E0128 15:19:57.170548 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.266289 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.266786 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.266904 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.267026 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.267137 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.371072 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.371632 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.371717 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.371833 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.371906 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.475219 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.475257 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.475268 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.475288 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.475302 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.579750 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.579812 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.579825 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.579853 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.579868 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.683742 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.683813 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.683826 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.683851 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.683864 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.787932 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.787990 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.788017 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.788039 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.788052 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.892347 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.892405 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.892418 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.892439 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.892452 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.996279 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.996325 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.996337 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.996358 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:57 crc kubenswrapper[4656]: I0128 15:19:57.996371 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:57Z","lastTransitionTime":"2026-01-28T15:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.085635 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:01:20.299056371 +0000 UTC Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.099992 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.100035 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.100049 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.100071 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.100086 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.170053 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.170053 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:19:58 crc kubenswrapper[4656]: E0128 15:19:58.170331 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.170087 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:19:58 crc kubenswrapper[4656]: E0128 15:19:58.170498 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:19:58 crc kubenswrapper[4656]: E0128 15:19:58.170603 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.204001 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.204056 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.204068 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.204090 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.204104 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.307060 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.307124 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.307139 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.307183 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.307193 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.409916 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.409973 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.409987 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.410019 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.410036 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.513417 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.513476 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.513486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.513510 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.513521 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.618347 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.618391 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.618407 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.618429 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.618441 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.722886 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.722937 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.722953 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.722972 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.722984 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.825860 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.825907 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.825919 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.825939 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.825952 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.929688 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.929755 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.929769 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.929791 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:58 crc kubenswrapper[4656]: I0128 15:19:58.929804 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:58Z","lastTransitionTime":"2026-01-28T15:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.033007 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.033071 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.033085 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.033109 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.033129 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.086278 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 17:15:26.103876332 +0000 UTC Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.137367 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.137438 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.137453 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.137485 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.137502 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.170279 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:59 crc kubenswrapper[4656]: E0128 15:19:59.170506 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.241202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.241256 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.241269 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.241289 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.241300 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.344423 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.344476 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.344487 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.344509 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.344523 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.448705 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.448805 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.448818 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.448841 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.448861 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.552018 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.552675 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.552693 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.552717 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.552730 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.656668 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.656718 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.656728 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.656748 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.656760 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.761077 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.761532 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.761655 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.761774 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.761888 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.797471 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:19:59 crc kubenswrapper[4656]: E0128 15:19:59.797949 4656 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:19:59 crc kubenswrapper[4656]: E0128 15:19:59.798124 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs podName:11320542-8463-40db-8981-632be2bd5a48 nodeName:}" failed. No retries permitted until 2026-01-28 15:20:31.798080905 +0000 UTC m=+122.306251879 (durationBeforeRetry 32s). 
Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.866775 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.866846 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.866864 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.866890 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.866905 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.971112 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.971263 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.971278 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.971303 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:19:59 crc kubenswrapper[4656]: I0128 15:19:59.971322 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:19:59Z","lastTransitionTime":"2026-01-28T15:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.075197 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.075288 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.075303 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.075326 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.075360 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.086798 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:10:01.869164617 +0000 UTC Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.170789 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.170873 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.170900 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:00 crc kubenswrapper[4656]: E0128 15:20:00.171029 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:00 crc kubenswrapper[4656]: E0128 15:20:00.171522 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:00 crc kubenswrapper[4656]: E0128 15:20:00.171768 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.179335 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.179387 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.179398 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.179420 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.179433 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.284225 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.284273 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.284285 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.284304 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.284316 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.387881 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.387970 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.387992 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.388016 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.388031 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.492067 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.492117 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.492128 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.492149 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.492190 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.597245 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.597306 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.597324 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.597351 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.597365 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.701036 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.701088 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.701099 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.701144 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.701155 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.804710 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.804774 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.804792 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.804815 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.804835 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.908913 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.909551 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.909571 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.909600 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:00 crc kubenswrapper[4656]: I0128 15:20:00.909613 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:00Z","lastTransitionTime":"2026-01-28T15:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.013389 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.013463 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.013478 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.013502 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.013522 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.086981 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:10:28.659244759 +0000 UTC Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.116863 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.117262 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.117356 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.117472 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.117554 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.169825 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:01 crc kubenswrapper[4656]: E0128 15:20:01.170479 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.193744 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.338826 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.338893 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.338908 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.338934 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.339010 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.354115 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.375272 4656 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.404413 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.429182 4656 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.443494 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.443582 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.443595 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.443618 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.443633 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.451548 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.473702 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.502667 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c4
3cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d356faff96585bdd00dba98190e1e0c0ecfbe5e4369e8e1efa437a24e03b3806\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:31Z\\\",\\\"message\\\":\\\"1.Pod event handler 3\\\\nI0128 15:19:31.807746 6061 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.807791 6061 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.808220 6061 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:19:31.809713 6061 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:31.809803 6061 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:19:31.809884 6061 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:31.809898 6061 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:31.809913 6061 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:31.809939 6061 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:31.809943 6061 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:31.809961 6061 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:31.809963 6061 factory.go:656] Stopping watch factory\\\\nI0128 15:19:31.809963 6061 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:31.809989 6061 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.524339 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.544215 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.548545 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.548678 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.548693 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc 
kubenswrapper[4656]: I0128 15:20:01.548718 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.548736 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.564822 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca
001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.585854 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.604340 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.623497 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.638518 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.647488 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.647532 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.647543 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.647563 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.647573 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.660491 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf
8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: E0128 15:20:01.662182 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.666783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.666831 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.666844 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.666870 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.666884 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.680136 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: E0128 15:20:01.683233 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.690000 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.690047 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.690062 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.690087 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.690103 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:01 crc kubenswrapper[4656]: E0128 15:20:01.706497 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status patch payload identical to the preceding attempt; elided as a verbatim duplicate] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z"
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.712623 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.712732 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
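Every one of these patch attempts fails the same way: the TLS handshake with the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is rejected because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-28. A minimal sketch of the validity-window test that produces Go's "certificate has expired or is not yet valid" message (illustrative only, not the kubelet's code; the certificate path is hypothetical):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical location of the webhook's serving certificate.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	// The validity-window check behind "certificate has expired or is not yet valid".
	now := time.Now()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
}
```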
event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.712746 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.712783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.712795 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: E0128 15:20:01.728971 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.733866 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.733937 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
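The body the kubelet keeps retrying is a strategic merge patch against the Node object: the $setElementOrder/conditions directive fixes the ordering of the condition list while each condition object is merged into the existing entry with the same "type" key. A rough sketch of assembling such a patch with encoding/json (field values abbreviated from the log; not the kubelet's actual implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// $setElementOrder/conditions lists the merge key ("type") of every
	// element so the API server keeps the list order stable while merging.
	patch := map[string]any{
		"status": map[string]any{
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"},
				{"type": "DiskPressure"},
				{"type": "PIDPressure"},
				{"type": "Ready"},
			},
			// Each condition object is merged into the existing list entry
			// that carries the same "type" value.
			"conditions": []map[string]string{{
				"type":              "Ready",
				"status":            "False",
				"reason":            "KubeletNotReady",
				"lastHeartbeatTime": "2026-01-28T15:20:01Z",
			}},
		},
	}
	body, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	// Sent with content type application/strategic-merge-patch+json.
	fmt.Println(string(body))
}
```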
event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.733950 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.733972 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.733987 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: E0128 15:20:01.750042 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:01 crc kubenswrapper[4656]: E0128 15:20:01.750264 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.752930 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
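The 15:20:01.750264 entry explains why the webhook errors arrive in bursts: node status updates run inside a small fixed retry loop, and only after the whole budget is spent does the kubelet log "update node status exceeds retry count" and wait for the next sync period. A schematic of that control flow (the constant is 5 in upstream kubelet sources; treat the value as illustrative here):

```go
package main

import (
	"errors"
	"fmt"
)

// Upstream kubelet defines nodeStatusUpdateRetry = 5; illustrative only.
const nodeStatusUpdateRetry = 5

// patchNodeStatus stands in for the PATCH call the webhook keeps rejecting.
func patchNodeStatus() error {
	return errors.New("failed calling webhook: tls: failed to verify certificate")
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return // success: nothing more to do this sync period
	}
	// Budget exhausted: give up until the next node-status sync.
	fmt.Println("Unable to update node status: update node status exceeds retry count")
}
```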
event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.752982 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.752992 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.753014 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.753029 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.856968 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.857539 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.857635 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.857747 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.857829 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.961488 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.961604 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.961620 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.961645 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:01 crc kubenswrapper[4656]: I0128 15:20:01.961659 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:01Z","lastTransitionTime":"2026-01-28T15:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.064564 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.064653 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.064673 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.064717 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.064731 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.087939 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:52:55.690988503 +0000 UTC Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.168502 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.168554 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.168564 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.168581 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.168591 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.170011 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.170106 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:02 crc kubenswrapper[4656]: E0128 15:20:02.170251 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
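The NotReady condition itself comes from a separate failure: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI network configuration yet. A sketch of the kind of directory scan a CNI config loader performs (the recognized extensions follow libcni's conventions; this is not the runtime's code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory the container runtime watches for a network configuration.
	const confDir = "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var found []string
	for _, e := range entries {
		// libcni conventionally accepts .conf, .conflist and .json files.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		// This is the condition surfaced above as NetworkReady=false.
		fmt.Println("no CNI configuration file in", confDir)
		return
	}
	fmt.Println("network config present:", found)
}
```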
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:02 crc kubenswrapper[4656]: E0128 15:20:02.170351 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.170460 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:02 crc kubenswrapper[4656]: E0128 15:20:02.177440 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.272606 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.272657 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.272671 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.272694 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.272711 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.375614 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.375674 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.375687 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.375705 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.375717 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.478726 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.478777 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.478798 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.478817 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.478832 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.582797 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.582859 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.582878 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.582898 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.582912 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.686372 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.686420 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.686437 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.686456 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.686467 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.790065 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.790124 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.790138 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.790180 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.790198 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.894392 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.894971 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.894990 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.895012 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.895027 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.997851 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.997896 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.997907 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.997926 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:02 crc kubenswrapper[4656]: I0128 15:20:02.997939 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:02Z","lastTransitionTime":"2026-01-28T15:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.088717 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 22:58:14.826051139 +0000 UTC Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.102275 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.102348 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.102362 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.102390 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.102405 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.170362 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:03 crc kubenswrapper[4656]: E0128 15:20:03.170573 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.206353 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.206402 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.206415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.206437 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.206450 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.310223 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.310278 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.310288 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.310308 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.310318 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.413543 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.413615 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.413639 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.413663 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.413677 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.517569 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.517626 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.517639 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.517660 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.517675 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.621353 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.621400 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.621410 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.621428 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.621439 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.725984 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.726035 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.726049 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.726074 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.726089 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.830222 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.830262 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.830272 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.830291 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.830302 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.933460 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.933503 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.933513 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.933530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:03 crc kubenswrapper[4656]: I0128 15:20:03.933540 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:03Z","lastTransitionTime":"2026-01-28T15:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.037697 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.038220 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.038385 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.038495 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.038512 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.089633 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:20:13.367807964 +0000 UTC Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.141989 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.142061 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.142075 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.142101 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.142119 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.169699 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.169720 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.169777 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:04 crc kubenswrapper[4656]: E0128 15:20:04.170298 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:04 crc kubenswrapper[4656]: E0128 15:20:04.170458 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:04 crc kubenswrapper[4656]: E0128 15:20:04.170637 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.245405 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.245469 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.245485 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.245507 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.245519 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.348525 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.348569 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.348579 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.348596 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.348607 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.452191 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.452240 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.452251 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.452269 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.452279 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.555300 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.555348 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.555362 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.555379 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.555391 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.659645 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.659725 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.659743 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.659772 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.659787 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.763593 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.763637 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.763652 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.763674 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.763690 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.867080 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.867154 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.867202 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.867229 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.867241 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.971932 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.971999 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.972009 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.972031 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:04 crc kubenswrapper[4656]: I0128 15:20:04.972043 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:04Z","lastTransitionTime":"2026-01-28T15:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.075396 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.075452 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.075464 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.075486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.075500 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:05Z","lastTransitionTime":"2026-01-28T15:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.090582 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:31:25.3678941 +0000 UTC Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.170647 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:05 crc kubenswrapper[4656]: E0128 15:20:05.171016 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.174030 4656 scope.go:117] "RemoveContainer" containerID="62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b" Jan 28 15:20:05 crc kubenswrapper[4656]: E0128 15:20:05.174851 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.181767 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.181841 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.181863 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.181900 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.310072 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:05Z","lastTransitionTime":"2026-01-28T15:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.328939 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.345426 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
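
The patch failures above explain why the node's status churn never reaches the API server cleanly: every pod status update must pass the pod.network-node-identity.openshift.io webhook on 127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24 while the clock reads 2026-01-28, so each patch fails x509 validation and is retried. (On CRC this is a familiar pattern after resuming an old snapshot; the operators re-issue certificates once they get going.) A standalone diagnostic sketch, with the address taken from the log entry, for inspecting any TLS endpoint's validity window:

    // certwindow.go: print a TLS server certificate's validity window
    // versus the local clock. Address from the log entry above; this is
    // a diagnostic sketch, not OpenShift code.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
            InsecureSkipVerify: true, // read the cert even though it is invalid
        })
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()
        cert := conn.ConnectionState().PeerCertificates[0]
        now := time.Now()
        fmt.Printf("NotBefore=%v NotAfter=%v now=%v expired=%v\n",
            cert.NotBefore, cert.NotAfter, now, now.After(cert.NotAfter))
    }
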
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.360416 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.375247 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.395756 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.416845 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.416921 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.416933 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.417005 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.417021 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:05Z","lastTransitionTime":"2026-01-28T15:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.421005 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.439756 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.460270 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\"
,\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.477007 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.496041 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.512933 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.520153 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.520228 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.520243 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.520264 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.520281 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:05Z","lastTransitionTime":"2026-01-28T15:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.536721 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.562922 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.610038 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.623134 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.623190 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.623201 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.623218 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.623227 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:05Z","lastTransitionTime":"2026-01-28T15:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.645729 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.677844 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 
15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.695491 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:05Z is after 2025-08-24T17:21:41Z" Jan 28 
15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.726953 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.727031 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.727049 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.727075 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.727092 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:05Z","lastTransitionTime":"2026-01-28T15:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.831218 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.831271 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.831281 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.831304 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.831316 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:05Z","lastTransitionTime":"2026-01-28T15:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.935104 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.935168 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.935181 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.935201 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:05 crc kubenswrapper[4656]: I0128 15:20:05.935217 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:05Z","lastTransitionTime":"2026-01-28T15:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.038129 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.038212 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.038223 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.038246 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.038258 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.091625 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 07:28:28.10227335 +0000 UTC
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.141462 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.141514 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.141524 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.141543 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.141554 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.169893 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.170005 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:20:06 crc kubenswrapper[4656]: E0128 15:20:06.170123 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.170185 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:20:06 crc kubenswrapper[4656]: E0128 15:20:06.170292 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:20:06 crc kubenswrapper[4656]: E0128 15:20:06.170405 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.245396 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.245486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.245509 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.245570 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.245594 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.323754 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/0.log" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.323843 4656 generic.go:334] "Generic (PLEG): container finished" podID="7662a84d-d9cb-4684-b76f-c63ffeff8344" containerID="469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434" exitCode=1 Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.323888 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rpzjg" event={"ID":"7662a84d-d9cb-4684-b76f-c63ffeff8344","Type":"ContainerDied","Data":"469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.324476 4656 scope.go:117] "RemoveContainer" containerID="469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.342214 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b
4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.348837 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.348895 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.348917 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.348946 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.348961 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.369577 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.388927 4656 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.417765 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.433655 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.451945 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.451996 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.452009 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.452032 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.452055 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.457367 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.477878 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.497488 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.516705 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.537060 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.554783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.554831 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.554841 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.554863 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.554876 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.556404 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:05Z\\\",\\\"message\\\":\\\"2026-01-28T15:19:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9\\\\n2026-01-28T15:19:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9 to /host/opt/cni/bin/\\\\n2026-01-28T15:19:20Z [verbose] multus-daemon started\\\\n2026-01-28T15:19:20Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:20:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.573110 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.595329 4656 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.618601 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.635830 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.658027 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.658062 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.658071 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.658089 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.658101 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.659659 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.674250 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.761836 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.761876 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.761888 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.761907 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.761919 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.865842 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.865884 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.865893 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.865917 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.865931 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.969767 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.969813 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.969823 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.969848 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:06 crc kubenswrapper[4656]: I0128 15:20:06.969861 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:06Z","lastTransitionTime":"2026-01-28T15:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.073440 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.073520 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.073532 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.073555 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.073575 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.091840 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 12:49:11.225859024 +0000 UTC Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.171924 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:07 crc kubenswrapper[4656]: E0128 15:20:07.172060 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.176557 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.176591 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.176602 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.176622 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.176635 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.280646 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.280713 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.280731 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.280754 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.280768 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.330024 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/0.log" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.330099 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rpzjg" event={"ID":"7662a84d-d9cb-4684-b76f-c63ffeff8344","Type":"ContainerStarted","Data":"c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.346573 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.359121 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.373205 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.383461 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.383493 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.383505 4656 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.383522 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.383533 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.390191 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.401675 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.421794 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:05Z\\\",\\\"message\\\":\\\"2026-01-28T15:19:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9\\\\n2026-01-28T15:19:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9 to /host/opt/cni/bin/\\\\n2026-01-28T15:19:20Z [verbose] multus-daemon started\\\\n2026-01-28T15:19:20Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:20:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.444598 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.472057 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.486580 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.486612 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.486620 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.486636 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.486646 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.488182 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.500819 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.514963 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.527662 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.539941 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.551616 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.564865 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.581920 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.588986 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.589041 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc 
kubenswrapper[4656]: I0128 15:20:07.589053 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.589070 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.589453 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.595479 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.692699 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.692730 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.692745 4656 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.692766 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.692780 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.803938 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.804356 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.804748 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.804924 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.805149 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.908268 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.908762 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.908865 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.908976 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:07 crc kubenswrapper[4656]: I0128 15:20:07.909077 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:07Z","lastTransitionTime":"2026-01-28T15:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.012030 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.012445 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.012543 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.012658 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.012726 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.092125 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:18:57.099141849 +0000 UTC Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.115144 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.115488 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.115606 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.115690 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.115770 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.170510 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.170602 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:08 crc kubenswrapper[4656]: E0128 15:20:08.170630 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:08 crc kubenswrapper[4656]: E0128 15:20:08.170665 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.170525 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:08 crc kubenswrapper[4656]: E0128 15:20:08.171055 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.218567 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.218607 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.218617 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.218631 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.218640 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.321519 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.321925 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.322101 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.322325 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.322471 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.425058 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.425098 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.425109 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.425127 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.425139 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.527559 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.527598 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.527606 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.527621 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.527630 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.629901 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.629952 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.629964 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.629983 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.629996 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.732753 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.732791 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.732802 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.732836 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.732847 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.836503 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.836548 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.836581 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.836602 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.836614 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.938409 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.938459 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.938469 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.938485 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:08 crc kubenswrapper[4656]: I0128 15:20:08.938494 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:08Z","lastTransitionTime":"2026-01-28T15:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.040737 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.040776 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.040788 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.040804 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.040814 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.093025 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:53:13.049360163 +0000 UTC Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.143296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.143350 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.143365 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.143385 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.143397 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.170079 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:09 crc kubenswrapper[4656]: E0128 15:20:09.170317 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.246488 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.246528 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.246541 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.246556 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.246567 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.348358 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.348415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.348429 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.348457 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.348473 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.451144 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.451200 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.451212 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.451229 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.451240 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.553854 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.553905 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.553923 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.553949 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.553969 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.656900 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.656942 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.656952 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.656970 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.656983 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.759896 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.759969 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.759995 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.760033 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.760058 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.862470 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.862511 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.862520 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.862534 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.862543 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.966047 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.966290 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.966394 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.966467 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:09 crc kubenswrapper[4656]: I0128 15:20:09.966546 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:09Z","lastTransitionTime":"2026-01-28T15:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.069240 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.069485 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.069549 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.069655 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.069727 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.093600 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:40:51.728602886 +0000 UTC Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.170094 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.170333 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:10 crc kubenswrapper[4656]: E0128 15:20:10.170415 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:10 crc kubenswrapper[4656]: E0128 15:20:10.170326 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.170474 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:10 crc kubenswrapper[4656]: E0128 15:20:10.170525 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.172793 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.172824 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.172836 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.172852 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.172864 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.275530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.275576 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.275592 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.275615 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.275634 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.383768 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.383819 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.383831 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.383850 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.383863 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.486230 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.486273 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.486283 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.486300 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.486310 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.588330 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.588403 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.588429 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.588458 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.588478 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.690353 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.690394 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.690412 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.690433 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.690448 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.793182 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.793222 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.793234 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.793253 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.793266 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.896363 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.896406 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.896415 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.896432 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.896443 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.999123 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.999434 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.999529 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.999597 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:10 crc kubenswrapper[4656]: I0128 15:20:10.999685 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:10Z","lastTransitionTime":"2026-01-28T15:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.094692 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 08:45:38.141674801 +0000 UTC Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.103022 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.103066 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.103078 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.103130 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.103141 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.169818 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:11 crc kubenswrapper[4656]: E0128 15:20:11.170149 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.190349 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.194301 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 
crc kubenswrapper[4656]: I0128 15:20:11.205291 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.205329 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.205339 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.205358 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.205370 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.211229 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 
15:20:11.223743 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.240039 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:05Z\\\",\\\"message\\\":\\\"2026-01-28T15:19:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9\\\\n2026-01-28T15:19:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9 to /host/opt/cni/bin/\\\\n2026-01-28T15:19:20Z [verbose] multus-daemon started\\\\n2026-01-28T15:19:20Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:20:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.262699 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.278323 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.294640 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.308045 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.308097 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.308109 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.308129 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.308142 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.308774 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.323440 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.333472 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.346403 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.358670 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.368689 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.385623 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.399153 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.411418 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.411483 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.411500 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.411525 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.411542 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.412155 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.423687 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.513817 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.513867 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.513876 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.513891 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.513899 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.616066 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.616106 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.616115 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.616133 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.616207 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.718818 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.718868 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.718882 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.718902 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.718913 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.821680 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.821750 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.821762 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.821782 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.821795 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.852424 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.852489 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.852501 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.852522 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.852532 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: E0128 15:20:11.867047 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.871323 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.871354 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.871367 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.871384 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.871396 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: E0128 15:20:11.886140 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.891090 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.891120 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.891128 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.891141 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.891150 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: E0128 15:20:11.902525 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.906326 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.906358 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.906371 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.906388 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.906397 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: E0128 15:20:11.918588 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.922766 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.922798 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.922809 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.922824 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.922832 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:11 crc kubenswrapper[4656]: E0128 15:20:11.941185 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:11Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:11 crc kubenswrapper[4656]: E0128 15:20:11.941314 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.943486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.943506 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.943516 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.943530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:11 crc kubenswrapper[4656]: I0128 15:20:11.943542 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:11Z","lastTransitionTime":"2026-01-28T15:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.046083 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.046122 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.046134 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.046153 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.046187 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.095941 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 02:07:09.758961828 +0000 UTC Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.148312 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.148378 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.148387 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.148406 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.148417 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.169731 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.169800 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:12 crc kubenswrapper[4656]: E0128 15:20:12.169910 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.169956 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:12 crc kubenswrapper[4656]: E0128 15:20:12.170049 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:12 crc kubenswrapper[4656]: E0128 15:20:12.170200 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.250822 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.250858 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.250867 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.250884 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.250893 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.353403 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.353453 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.353465 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.353484 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.353499 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.460362 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.460440 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.460458 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.460936 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.460989 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.563153 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.563200 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.563220 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.563238 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.563251 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.666226 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.666270 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.666280 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.666294 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.666304 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.768482 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.768546 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.768560 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.768599 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.768612 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.872198 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.872250 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.872290 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.872312 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.872328 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.975144 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.975253 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.975269 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.975289 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:12 crc kubenswrapper[4656]: I0128 15:20:12.975303 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:12Z","lastTransitionTime":"2026-01-28T15:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.078844 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.078918 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.078937 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.079024 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.079043 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.096398 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:00:40.837581498 +0000 UTC
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.172556 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:20:13 crc kubenswrapper[4656]: E0128 15:20:13.172957 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.182061 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.182397 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.182483 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.182601 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.182670 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
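
Every NotReady heartbeat in this section traces back to one condition: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration yet (on this kind of OpenShift node the directory is populated once the network operator's pods come up). What the kubelet is waiting on amounts to that directory holding a loadable config; a rough local check, assuming CNI's usual .conf/.conflist/.json extensions (that extension set is an assumption, not read from this log):

    import os

    # Approximate the check behind "no CNI configuration file in
    # /etc/kubernetes/cni/net.d/". The path comes from the log above;
    # treating .conf/.conflist/.json as the full extension set is an
    # assumption.
    CNI_DIR = "/etc/kubernetes/cni/net.d/"

    def cni_configs(d=CNI_DIR):
        try:
            return sorted(f for f in os.listdir(d)
                          if f.endswith((".conf", ".conflist", ".json")))
        except FileNotFoundError:
            return []

    configs = cni_configs()
    print("network ready" if configs else "NetworkPluginNotReady", configs)

On the node at this point in the boot the list would be empty, matching the NetworkPluginNotReady message repeated above.
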
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.284942 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.285240 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.285340 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.285413 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.285606 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.388261 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.388303 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.388316 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.388336 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.388349 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.490605 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.490687 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.490706 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.490730 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.490747 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.592856 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.592937 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.592945 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.592962 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.592971 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.695525 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.695574 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.695592 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.695615 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.695631 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.798509 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.798622 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.798638 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.798661 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.798678 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.901054 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.901104 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.901120 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.901145 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:13 crc kubenswrapper[4656]: I0128 15:20:13.901190 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:13Z","lastTransitionTime":"2026-01-28T15:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.005383 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.005460 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.005483 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.005512 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.005532 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.096636 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:58:54.47097829 +0000 UTC
Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.108003 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.108096 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.108114 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.108137 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.108149 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.170469 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.170560 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.170974 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.171228 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.171572 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.171655 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
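
The certificate_manager.go:356 lines repeat about once per second with a fixed expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on each pass, and every logged deadline lies before the journal's own timestamps: rotation is already due, and the manager recomputes a randomized deadline each time it looks. (That client-go jitters the deadline within the certificate's lifetime is an assumption from its documented behavior, not something this log states.) Parsing the two timestamps makes the "deadline already behind us" situation explicit; a small sketch against one of the lines above:

    from datetime import datetime, timezone
    import re

    # Parse a certificate_manager.go:356 line and report whether the
    # logged rotation deadline has already passed. Go prints more
    # fractional digits than Python's %f accepts, so trim to microseconds.
    LINE = ('Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, '
            'rotation deadline is 2025-12-28 04:58:54.47097829 +0000 UTC')
    TS = re.compile(r'expiration is (.+?) \+0000 UTC, rotation deadline is (.+?) \+0000 UTC')

    def parse(ts):
        if '.' in ts:
            head, frac = ts.split('.')
            ts = head + '.' + frac[:6]
            return datetime.strptime(ts, '%Y-%m-%d %H:%M:%S.%f').replace(tzinfo=timezone.utc)
        return datetime.strptime(ts, '%Y-%m-%d %H:%M:%S').replace(tzinfo=timezone.utc)

    expiry, deadline = map(parse, TS.search(LINE).groups())
    log_time = datetime(2026, 1, 28, 15, 20, 14, tzinfo=timezone.utc)  # journal stamp above
    print('rotation overdue:', deadline < log_time, '| certificate expires in:', expiry - log_time)

For this line the deadline is roughly a month in the past while the certificate itself is still valid for about four weeks, which is consistent with the manager retrying rotation on every pass.
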
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.210941 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.210978 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.210991 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.211008 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.211021 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.314225 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.314318 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.314344 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.314375 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.314398 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.416536 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.416581 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.416593 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.416615 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.416634 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.520680 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.520750 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.520760 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.520779 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.520789 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.623059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.623113 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.623127 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.623154 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.623185 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.726670 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.726723 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.726734 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.726754 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.726767 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.829245 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.829290 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.829302 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.829321 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.829335 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.895435 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.895589 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895615 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 15:21:18.895591046 +0000 UTC m=+169.403761850 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.895657 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.895685 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.895706 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895713 4656 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895757 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.895746101 +0000 UTC m=+169.403916915 (durationBeforeRetry 1m4s). 
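
The "durationBeforeRetry 1m4s" next to "No retries permitted until 15:21:18" is the signature of the volume manager's exponential backoff (the m=+169.4 figure is Go's monotonic-clock annotation, i.e. seconds since the kubelet process started). 64 s is what repeated doubling from a sub-second initial delay reaches after a handful of failed attempts; the exact initial delay and cap below are assumptions about kubelet's nested pending operations, not values read from this log, so the sketch only reproduces the shape of the schedule:

    # Retry schedule implied by "durationBeforeRetry 1m4s": exponential
    # doubling from an assumed 0.5 s initial delay with an assumed ~2 min
    # cap (both constants are assumptions, not read from this log).
    initial, factor, cap = 0.5, 2.0, 122.0

    delay, schedule = initial, []
    for attempt in range(1, 10):
        schedule.append((attempt, delay))
        delay = min(delay * factor, cap)

    for attempt, d in schedule:
        print(f"attempt {attempt}: wait {d:g}s")
    # under these assumptions attempt 8 waits 64 s, the 1m4s logged above
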
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895811 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895828 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895844 4656 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895854 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895870 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.895863254 +0000 UTC m=+169.404034058 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895875 4656 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895888 4656 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.895931 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.895919316 +0000 UTC m=+169.404090130 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.896060 4656 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:20:14 crc kubenswrapper[4656]: E0128 15:20:14.896256 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.896223114 +0000 UTC m=+169.404393918 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.931687 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.931737 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.931747 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.931767 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:14 crc kubenswrapper[4656]: I0128 15:20:14.931779 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:14Z","lastTransitionTime":"2026-01-28T15:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.035242 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.035279 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.035288 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.035364 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.035381 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.096968 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:58:16.387273722 +0000 UTC
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.138056 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.138109 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.138118 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.138138 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.138148 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.169923 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:20:15 crc kubenswrapper[4656]: E0128 15:20:15.170130 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.240452 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.240492 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.240510 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.240528 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.240539 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.343148 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.343223 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.343234 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.343255 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.343269 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.445456 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.445495 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.445503 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.445517 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.445527 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.549288 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.549344 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.549354 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.549379 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.549391 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.652231 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.652287 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.652296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.652314 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.652328 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.755696 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.755742 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.755753 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.755771 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.755783 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.858186 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.858227 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.858238 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.858255 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.858265 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.961587 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.961661 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.961677 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.961702 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:15 crc kubenswrapper[4656]: I0128 15:20:15.961746 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:15Z","lastTransitionTime":"2026-01-28T15:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.064594 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.064645 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.064656 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.064672 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.064683 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.097155 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 06:36:25.847152605 +0000 UTC
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.167838 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.167928 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.167950 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.167977 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.167996 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.169977 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.169982 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.170013 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
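
Each "No sandbox for pod can be found. Need to start a new one" above is followed within milliseconds by an "Error syncing pod, skipping" for the same pod: after the restart the runtime holds no sandboxes for these pods, and creating a new sandbox needs pod networking, which is exactly what NetworkReady=false blocks, so the same three pods cycle on every resync. Pairing the two messages per pod makes that dependency visible; a sketch using the same assumed kubelet.log dump as above:

    import re

    # Pair "No sandbox ..." lines with the matching "Error syncing pod"
    # lines to list pods blocked on sandbox creation by the CNI wait.
    NO_SANDBOX = re.compile(r'"No sandbox for pod can be found.*?" pod="([^"]+)"')
    SYNC_ERR = re.compile(r'"Error syncing pod, skipping".*?pod="([^"]+)"')

    needs_sandbox, sync_failed = set(), set()
    for line in open("kubelet.log", encoding="utf-8"):
        needs_sandbox.update(NO_SANDBOX.findall(line))
        sync_failed.update(SYNC_ERR.findall(line))

    for pod in sorted(needs_sandbox & sync_failed):
        print("blocked on sandbox + CNI:", pod)

Over this section the intersection is the four pods that keep reappearing: the multus metrics daemon, the two network-diagnostics pods, and the networking console plugin.
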
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:16 crc kubenswrapper[4656]: E0128 15:20:16.170278 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:16 crc kubenswrapper[4656]: E0128 15:20:16.170323 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:16 crc kubenswrapper[4656]: E0128 15:20:16.170373 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.270791 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.271115 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.271333 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.271510 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.271624 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.374016 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.374074 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.374091 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.374114 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.374132 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.477078 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.477122 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.477135 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.477153 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.477186 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.579783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.579838 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.579853 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.579873 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.579885 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.682606 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.682642 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.682651 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.682665 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.682674 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.786948 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.786991 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.787001 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.787016 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.787026 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.889182 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.889218 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.889229 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.889245 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.889259 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.993212 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.993259 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.993268 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.993293 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:16 crc kubenswrapper[4656]: I0128 15:20:16.993305 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:16Z","lastTransitionTime":"2026-01-28T15:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.096079 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.096125 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.096136 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.096152 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.096186 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.098225 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 13:57:24.422152951 +0000 UTC
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.169716 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:20:17 crc kubenswrapper[4656]: E0128 15:20:17.169876 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.199865 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.199922 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.199935 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.199957 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.199972 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.305572 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.305605 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.305618 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.305632 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.305642 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.410237 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.410272 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.410281 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.410295 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.410304 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.513841 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.513903 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.513926 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.513960 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.513980 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.617602 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.617671 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.617688 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.617714 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.617742 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.721080 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.721148 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.721209 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.721243 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.721268 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.825465 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.825535 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.825559 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.825593 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.825618 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.936017 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.936072 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.936088 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.936113 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:17 crc kubenswrapper[4656]: I0128 15:20:17.936133 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:17Z","lastTransitionTime":"2026-01-28T15:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.038529 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.038579 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.038590 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.038608 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.038622 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.099200 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:14:17.79040175 +0000 UTC Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.142227 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.142296 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.142314 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.142343 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.142364 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.170228 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.170327 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:18 crc kubenswrapper[4656]: E0128 15:20:18.170416 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.170356 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:18 crc kubenswrapper[4656]: E0128 15:20:18.170606 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:18 crc kubenswrapper[4656]: E0128 15:20:18.170841 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.245295 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.245351 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.245367 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.245390 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.245406 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.348691 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.348805 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.348824 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.348894 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.348904 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.467505 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.467837 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.467934 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.468069 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.468193 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.571998 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.572062 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.572082 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.572109 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.572128 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.674568 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.674872 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.674966 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.675058 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.675139 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.779382 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.780671 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.780893 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.781095 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.781393 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.906653 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.906702 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.906713 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.906732 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:18 crc kubenswrapper[4656]: I0128 15:20:18.906745 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:18Z","lastTransitionTime":"2026-01-28T15:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.009406 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.009436 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.009444 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.009458 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.009467 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.100093 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:41:06.603566603 +0000 UTC Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.113059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.113111 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.113127 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.113152 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.113201 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.170555 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:19 crc kubenswrapper[4656]: E0128 15:20:19.170735 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.216686 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.217077 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.217274 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.217452 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.217680 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.320135 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.320254 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.320278 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.320307 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.320325 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.423642 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.423702 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.423726 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.423757 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.423783 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.526430 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.526473 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.526486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.526670 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.526698 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.628697 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.628737 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.628752 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.628769 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.628781 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.731588 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.731639 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.731652 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.731671 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.731684 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.834516 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.834558 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.834578 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.834606 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.834617 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.938108 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.938217 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.938255 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.938285 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:19 crc kubenswrapper[4656]: I0128 15:20:19.938319 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:19Z","lastTransitionTime":"2026-01-28T15:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.040933 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.040992 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.041010 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.041034 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.041048 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.100636 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:09:43.052797393 +0000 UTC
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.148951 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.148999 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.149015 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.149044 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.149063 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.170479 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.170700 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:20:20 crc kubenswrapper[4656]: E0128 15:20:20.170703 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.170797 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:20:20 crc kubenswrapper[4656]: E0128 15:20:20.170941 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:20:20 crc kubenswrapper[4656]: E0128 15:20:20.171835 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.172334 4656 scope.go:117] "RemoveContainer" containerID="62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.251570 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.251610 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.251621 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.251635 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.251645 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.354016 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.354114 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.354132 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.354152 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.354183 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.456683 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.456728 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.456741 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.456760 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.456775 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.558905 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.558990 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.559009 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.559034 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.559047 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.661857 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.661902 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.661913 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.661934 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.661948 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.764490 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.764532 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.764541 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.764557 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.764566 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.866525 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.866561 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.866571 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.866585 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.866594 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.968918 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.968953 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.968963 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.968981 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:20 crc kubenswrapper[4656]: I0128 15:20:20.968993 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:20Z","lastTransitionTime":"2026-01-28T15:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.071290 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.071323 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.071332 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.071349 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.071362 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.101094 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:10:07.758839126 +0000 UTC
Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.169834 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:20:21 crc kubenswrapper[4656]: E0128 15:20:21.170002 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.173390 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.173419 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.173427 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.173441 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.173460 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.186337 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z"
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.200476 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.222391 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.237010 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.250485 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.263905 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:05Z\\\",\\\"message\\\":\\\"2026-01-28T15:19:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9\\\\n2026-01-28T15:19:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9 to /host/opt/cni/bin/\\\\n2026-01-28T15:19:20Z [verbose] multus-daemon started\\\\n2026-01-28T15:19:20Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:20:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.276697 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.276724 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.276734 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.276750 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.276759 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.285808 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.300311 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 
15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.314042 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.327582 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.341564 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.352710 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.363430 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.375197 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.378881 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.378915 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.378938 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.378957 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.378969 4656 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.389264 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/3.log" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.389927 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/2.log" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.392742 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" exitCode=1 Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.392783 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.392839 4656 scope.go:117] "RemoveContainer" containerID="62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.395930 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:20:21 crc kubenswrapper[4656]: E0128 15:20:21.396272 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.401837 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59b72fa-6f07-4658-b277-0b10b8bf83a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f7605b956da7648bb4ea64104ebddadf45a4297723b28d1813ec330122f9de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4f0a931b81775cf3bcadec1c2d278079e6d6c08334a5d412f957ea057000a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33daf9c576813489c0d122bf8b57511d33c442f5c4f81c8a1ba17b349d04d4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be91e89edee3d65aa4855a9e6e4354e182726
e95ba57165fbebc4e1b334a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25512a305427a2fbc0dc915cc3dfb21cadd3db472a2764f1b5a686d60ec422e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.417798 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7
a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.434234 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1418
8e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.445319 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc 
kubenswrapper[4656]: I0128 15:20:21.455204 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.468590 4656 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.481455 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.481515 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.481551 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.481564 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.481581 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.481594 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.507710 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59b72fa-6f07-4658-b277-0b10b8bf83a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f7605b956da7648bb4ea64104ebddadf45a4297723b28d1813ec330122f9de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4f0a931b81775cf3bcadec1c2d278079e6d6c08334a5d412f957ea057000a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33daf9c576813489c0d122bf8b57511d33c442f5c4f81c8a1ba17b349d04d4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be91e89edee3d65aa4855a9e6e4354e182726e95ba57165fbebc4e1b334a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25512a305427a2fbc0dc915cc3dfb21cadd3db472a2764f1b5a686d60ec422e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.522915 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.534770 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.549846 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.562022 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.576684 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:05Z\\\",\\\"message\\\":\\\"2026-01-28T15:19:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9\\\\n2026-01-28T15:19:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9 to /host/opt/cni/bin/\\\\n2026-01-28T15:19:20Z [verbose] multus-daemon started\\\\n2026-01-28T15:19:20Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:20:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.584579 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.584625 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.584638 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.584657 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.584670 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.595858 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:21Z\\\",\\\"message\\\":\\\"73 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:20:21.222738 6673 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"7594bb65-e742-44b3-a975-d639b1128be5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"cluster\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.610771 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.623067 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.633393 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.646101 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.656853 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.667025 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.679106 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.686669 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.686706 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.686716 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.686734 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.686747 4656 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.693030 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8
945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.788055 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.788108 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.788117 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.788132 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.788143 4656 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.891018 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.891059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.891074 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.891092 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.891103 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.993804 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.993853 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.993866 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.993885 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:21 crc kubenswrapper[4656]: I0128 15:20:21.993897 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:21Z","lastTransitionTime":"2026-01-28T15:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.118871 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:49:08.362358 +0000 UTC
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.120960 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.120999 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.121009 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.121036 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.121055 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.170518 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.170581 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.170681 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.170775 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.170830 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.170882 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.224370 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.224417 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.224429 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.224449 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.224463 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.316220 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.316262 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.316271 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.316288 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.316298 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.329730 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:22Z is after 2025-08-24T17:21:41Z"
Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.333753 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
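
"Error updating node status, will retry" at 15:20:22.329730 is the first of several back-to-back attempts: the kubelet retries the node-status patch a fixed number of times per sync loop (five in current kubelets) before waiting for the next loop, which is why the same multi-kilobyte payload recurs below with only the microsecond timestamp changing. A hypothetical sketch for confirming that pattern from a saved journal excerpt (the filename is a placeholder, and the per-loop count of five is an assumption about the kubelet build in use):

```python
# Hypothetical helper for a saved journal excerpt: count "Error updating node
# status, will retry" attempts per second to watch the kubelet exhaust its
# per-loop retry budget against the dead webhook. Handles run-on lines that
# contain several journal entries by matching every occurrence per line.
import re
from collections import Counter

RETRY = re.compile(r'E(\d{4} \d{2}:\d{2}:\d{2})\.\d+\s+\d+ kubelet_node_status\.go:585\]')

counts: Counter[str] = Counter()
with open("kubelet-journal.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        for stamp in RETRY.findall(line):
            counts[stamp] += 1  # key: MMDD HH:MM:SS of the attempt

for stamp, n in sorted(counts.items()):
    print(f"{stamp}: {n} retry attempt(s)")
```
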
event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.333800 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.333819 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.333828 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.347757 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.353523 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.353571 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.353584 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.353606 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.353619 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.366857 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.371953 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.371987 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.372000 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.372015 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.372027 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.387268 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.391665 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.391755 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.391778 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.391802 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.391859 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.399506 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/3.log" Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.411099 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:22 crc kubenswrapper[4656]: E0128 15:20:22.411290 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.412798 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.412817 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.412825 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.412840 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.412849 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.516033 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.516095 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.516107 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.516127 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.516184 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.665088 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.665135 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.665152 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.665248 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.665268 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.768272 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.768337 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.768358 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.768382 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.768399 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.871356 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.871387 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.871396 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.871410 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.871419 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.973338 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.973372 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.973389 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.973410 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:22 crc kubenswrapper[4656]: I0128 15:20:22.973422 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:22Z","lastTransitionTime":"2026-01-28T15:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.076057 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.076092 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.076102 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.076119 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.076134 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.119261 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 14:33:20.452983085 +0000 UTC Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.170263 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:23 crc kubenswrapper[4656]: E0128 15:20:23.170552 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.178151 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.178225 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.178252 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.178300 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.178318 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.281600 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.281683 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.281713 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.281739 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.281757 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.385517 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.385581 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.385601 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.385629 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.385648 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.488689 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.488729 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.488755 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.488775 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.488784 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.591608 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.591654 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.591666 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.591685 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.591696 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.699396 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.699452 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.699466 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.699486 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.699503 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.803240 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.803280 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.803291 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.803308 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.803320 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.905922 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.905969 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.905978 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.905994 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:23 crc kubenswrapper[4656]: I0128 15:20:23.906004 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:23Z","lastTransitionTime":"2026-01-28T15:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.008959 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.009000 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.009008 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.009023 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.009033 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.112463 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.112510 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.112527 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.112549 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.112562 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.119706 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:57:40.509076986 +0000 UTC Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.170383 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.170409 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.170479 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:24 crc kubenswrapper[4656]: E0128 15:20:24.170551 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:24 crc kubenswrapper[4656]: E0128 15:20:24.170602 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:24 crc kubenswrapper[4656]: E0128 15:20:24.170650 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.214633 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.214668 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.214676 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.214691 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.214701 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.317008 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.317048 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.317059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.317076 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.317093 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.419951 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.420004 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.420020 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.420055 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.420072 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.522783 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.522822 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.522833 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.522851 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.522862 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.625939 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.625982 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.626021 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.626050 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.626065 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.728530 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.728580 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.728591 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.728614 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.728626 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.830986 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.831054 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.831069 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.831095 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.831109 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.933447 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.933485 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.933493 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.933509 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:24 crc kubenswrapper[4656]: I0128 15:20:24.933519 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:24Z","lastTransitionTime":"2026-01-28T15:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.036796 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.036861 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.036883 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.036911 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.036928 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.120765 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 12:37:14.924377772 +0000 UTC Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.139795 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.139827 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.139839 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.139857 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.139870 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.169683 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:25 crc kubenswrapper[4656]: E0128 15:20:25.169808 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.243286 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.243331 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.243347 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.243369 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.243384 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.345785 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.345842 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.345854 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.345871 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.345884 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.448751 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.448779 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.448787 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.448801 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.448810 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.551959 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.552000 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.552009 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.552026 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.552038 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.655354 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.655396 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.655408 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.655425 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.655438 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.759504 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.759573 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.759589 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.759616 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.759633 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.862799 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.862861 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.862882 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.862906 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.862924 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.966469 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.966540 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.966562 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.966594 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:25 crc kubenswrapper[4656]: I0128 15:20:25.966619 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:25Z","lastTransitionTime":"2026-01-28T15:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.068981 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.069257 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.069274 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.069295 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.069307 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.121317 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:10:15.735468271 +0000 UTC Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.170019 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.170073 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.170098 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:26 crc kubenswrapper[4656]: E0128 15:20:26.170140 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:26 crc kubenswrapper[4656]: E0128 15:20:26.170205 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:26 crc kubenswrapper[4656]: E0128 15:20:26.170755 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.171774 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.171851 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.171868 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.171883 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.171894 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.275035 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.275104 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.275124 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.275149 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.275214 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.377898 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.377941 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.377952 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.378627 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.378664 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.482344 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.482390 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.482399 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.482414 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.482423 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.585282 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.585329 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.585342 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.585360 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.585372 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.688452 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.688504 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.688520 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.688544 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.688564 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.790839 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.790893 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.790905 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.790923 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.790933 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.894917 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.894986 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.895003 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.895027 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:26 crc kubenswrapper[4656]: I0128 15:20:26.895059 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:26Z","lastTransitionTime":"2026-01-28T15:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.057349 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.057425 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.057438 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.057458 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.057469 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.121648 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 20:59:03.340220982 +0000 UTC Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.160355 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.160434 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.160448 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.160467 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.160480 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.169761 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:27 crc kubenswrapper[4656]: E0128 15:20:27.169923 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.262729 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.262768 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.262777 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.262793 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.262806 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.365420 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.365469 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.365478 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.365494 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.365505 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.468542 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.468590 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.468599 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.468616 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.468626 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.572014 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.572059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.572073 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.572092 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.572104 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.674586 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.674717 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.674736 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.674754 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.674784 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.777024 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.777079 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.777089 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.777108 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.777119 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.879244 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.879285 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.879293 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.879326 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.879336 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.982266 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.982328 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.982343 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.982366 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:27 crc kubenswrapper[4656]: I0128 15:20:27.982380 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:27Z","lastTransitionTime":"2026-01-28T15:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.085363 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.085401 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.085411 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.085431 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.085442 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.121865 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 11:13:20.134459222 +0000 UTC Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.170041 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.170122 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.170284 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:28 crc kubenswrapper[4656]: E0128 15:20:28.170270 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:28 crc kubenswrapper[4656]: E0128 15:20:28.170434 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:28 crc kubenswrapper[4656]: E0128 15:20:28.170529 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.185186 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.224714 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.224761 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.224772 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.224790 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.224804 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.326964 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.327007 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.327020 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.327039 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.327052 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.429059 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.429126 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.429152 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.429237 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.429261 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.533325 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.533379 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.533398 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.533427 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.533445 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.636365 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.636400 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.636410 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.636424 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.636434 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.739113 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.739155 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.739184 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.739236 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.739248 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.841557 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.841595 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.841604 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.841622 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.841642 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.944705 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.944753 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.944764 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.944786 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:28 crc kubenswrapper[4656]: I0128 15:20:28.944798 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:28Z","lastTransitionTime":"2026-01-28T15:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.047150 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.047205 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.047214 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.047228 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.047237 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.122829 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:24:50.833541604 +0000 UTC Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.150200 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.150269 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.150278 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.150297 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.150308 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.170369 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:29 crc kubenswrapper[4656]: E0128 15:20:29.170889 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.252697 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.252739 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.252754 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.252772 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.252783 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.355465 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.355522 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.355537 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.355584 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.355596 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.457531 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.457561 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.457571 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.457590 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.457602 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.560634 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.560662 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.560672 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.560687 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.560697 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.662822 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.662878 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.662888 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.662904 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.662918 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.765670 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.765707 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.765718 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.765735 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.765746 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.868917 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.868957 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.868969 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.868987 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.868999 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.980568 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.980625 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.980641 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.980661 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:29 crc kubenswrapper[4656]: I0128 15:20:29.980675 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:29Z","lastTransitionTime":"2026-01-28T15:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.083765 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.083818 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.083829 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.083847 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.083859 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.123255 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:23:34.377513291 +0000 UTC Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.169592 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.169604 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.169781 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:30 crc kubenswrapper[4656]: E0128 15:20:30.170133 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:30 crc kubenswrapper[4656]: E0128 15:20:30.170255 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:30 crc kubenswrapper[4656]: E0128 15:20:30.170338 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.186215 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.186243 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.186252 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.186268 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.186277 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.289078 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.289120 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.289131 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.289152 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.289184 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.391612 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.391655 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.391665 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.391682 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.391697 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.493962 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.494023 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.494041 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.494065 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.494085 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.596392 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.596447 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.596459 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.596478 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.596491 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.699051 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.699095 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.699109 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.699187 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.699203 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.802367 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.802407 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.802414 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.802433 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.802452 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.904995 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.905054 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.905066 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.905088 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:30 crc kubenswrapper[4656]: I0128 15:20:30.905100 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:30Z","lastTransitionTime":"2026-01-28T15:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.007826 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.007876 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.007887 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.007905 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.007917 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:31Z","lastTransitionTime":"2026-01-28T15:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:31 crc kubenswrapper[4656]: E0128 15:20:31.108777 4656 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.124099 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:51:34.234253786 +0000 UTC Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.169825 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:31 crc kubenswrapper[4656]: E0128 15:20:31.169968 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.195396 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.210546 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.224867 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.239357 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.258719 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:05Z\\\",\\\"message\\\":\\\"2026-01-28T15:19:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9\\\\n2026-01-28T15:19:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9 to /host/opt/cni/bin/\\\\n2026-01-28T15:19:20Z [verbose] multus-daemon started\\\\n2026-01-28T15:19:20Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:20:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.285618 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62876cbc6989cd50d6455e40ccf8b4284449ee5afe8e4e19746c9ffd66c3c42b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:19:49Z\\\",\\\"message\\\":\\\"ng *v1.Pod event handler 3 for removal\\\\nI0128 15:19:48.996370 6342 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:19:48.996423 6342 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:19:48.996464 6342 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:19:48.996561 6342 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:19:48.996586 6342 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:19:48.996626 6342 factory.go:656] Stopping watch factory\\\\nI0128 15:19:48.996662 6342 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:19:48.996688 6342 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:19:48.996724 6342 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:19:48.996311 6342 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:19:48.996909 6342 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:19:48.996733 6342 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:19:48.996749 6342 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:19:48.996771 6342 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:21Z\\\",\\\"message\\\":\\\"73 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:20:21.222738 6673 services_controller.go:473] Services do not match for network=default, existing lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"7594bb65-e742-44b3-a975-d639b1128be5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"cluster\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",
\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.301366 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.316277 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.330459 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.345925 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.359261 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.372065 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.397668 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59b72fa-6f07-4658-b277-0b10b8bf83a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f7605b956da7648bb4ea64104ebddadf45a4297723b28d1813ec330122f9de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4f0a931b81775cf3bcadec1c2d278079e6d6c08334a5d412f957ea057000a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33daf9c576813489c0d122bf8b57511d33c442f5c4f81c8a1ba17b349d04d4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be91e89edee3d65aa4855a9e6e4354e182726e95ba57165fbebc4e1b334a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25512a305427a2fbc0dc915cc3dfb21cadd3db472a2764f1b5a686d60ec422e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.409590 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.424945 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.439235 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.452204 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d4beba1-dc60-4190-925e-bd0c0d6deee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cadf761b9301aaeea19fad51cfac7b4aa80f49ae5e0fadf4eababc2c5bb945b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08932142792b5b7e1afc60e25e6fb6b092c9c65185a0e407f807d90b1928807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://826db300be47e8ade08ecd18880a53f4ce70b3b8f4ffbcd327fec2f952b0168d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5edae8f60ea42bc0f7cee0c415afdb634b13222f6a9b1bbac9e15d6b3ec3867\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.463725 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.473358 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:31 crc kubenswrapper[4656]: I0128 15:20:31.886596 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:31 crc kubenswrapper[4656]: E0128 15:20:31.886917 4656 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:20:31 crc kubenswrapper[4656]: E0128 15:20:31.887030 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs podName:11320542-8463-40db-8981-632be2bd5a48 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:35.887002634 +0000 UTC m=+186.395173448 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs") pod "network-metrics-daemon-bmj6r" (UID: "11320542-8463-40db-8981-632be2bd5a48") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.046003 4656 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.125006 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:09:13.782569124 +0000 UTC Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.170598 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.170634 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.170792 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.170916 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.171285 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.171367 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.684155 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.684211 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.684221 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.684295 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.684309 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:32Z","lastTransitionTime":"2026-01-28T15:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.702355 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.706046 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.706080 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.706091 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.706107 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.706119 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:32Z","lastTransitionTime":"2026-01-28T15:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.722328 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.726682 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.726747 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.726759 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.726774 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.726836 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:32Z","lastTransitionTime":"2026-01-28T15:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.743361 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.748353 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.748416 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.748428 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.748448 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.748460 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:32Z","lastTransitionTime":"2026-01-28T15:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.766528 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.770922 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.770951 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.770960 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.770975 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:32 crc kubenswrapper[4656]: I0128 15:20:32.770985 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:32Z","lastTransitionTime":"2026-01-28T15:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.788410 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c05dbb0a-1aab-49df-9964-1b1f0273dfec\\\",\\\"systemUUID\\\":\\\"a40465ae-d87c-4dd5-a6fc-ca512905e140\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:32 crc kubenswrapper[4656]: E0128 15:20:32.788549 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.125527 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:05:17.224231566 +0000 UTC Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.170555 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:33 crc kubenswrapper[4656]: E0128 15:20:33.171250 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.171678 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:20:33 crc kubenswrapper[4656]: E0128 15:20:33.172014 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.186264 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962b117d-bfcf-4c25-a0db-306e773ac59e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13de3396eb19734b695657f2c1b1960c0ee5472b1f93248b268c1290bb52587f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4
ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f7a7c8b82219f6bbe206717ede3013233ade8b8d288897923ed434cf5c3072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.208102 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-854tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f9a9023-4c07-4c93-b4d6-9034873ace37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a18819209b423cd8f595d0ebbce07c1abc61e63acd185ffa5743b4e7779541b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce
5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14188e85f401f32f9d1e82f24fcbf8a2aa37b7f910146379b732bd948221de69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1e46d1ce8eb0d139551a9dbf2267ce3c47879624321466e2db44464a519ae6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d916946cd0eb6d17f617b8fdd7d433184baaf77f51536334ea3c94c35de3f3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\"
:\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d623a0f238613fd7d0d7828f85447f9ba5fa8f90fe85bba7cecfb81d107cf9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23352506220d8dac15f9a04a8410c34a21f113f7f10eb14e8408e8bb4fd24516\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a8e30e1d082d802a19ef5383ce4512dd353c4994983851c7462b728c4f7fc6\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcmdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-854tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.222511 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2dc751f932f33a3d9b60a28a7862260c2540d659f8c12630108a7247d687c7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.246848 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c59b72fa-6f07-4658-b277-0b10b8bf83a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f7605b956da7648bb4ea64104ebddadf45a4297723b28d1813ec330122f9de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4f0a931b81775cf3bcadec1c2d278079e6d6c08334a5d412f957ea057000a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33daf9c576813489c0d122bf8b57511d33c442f5c4f81c8a1ba17b349d04d4da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9be91e89edee3d65aa4855a9e6e4354e182726
e95ba57165fbebc4e1b334a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25512a305427a2fbc0dc915cc3dfb21cadd3db472a2764f1b5a686d60ec422e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cbbc1ec9b740fc4f7904482c7828ae8ef39c1b7f440d83f3e05f06c96bcf0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2208acd741851e0b1169b6e4824581794a0115f0e802d9b09b728078f05a45f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa42b9d45549e6850eebf3656c06a20c7ec80a7ab6c4f0e9b643f7c43399cda0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.265721 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.277201 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11320542-8463-40db-8981-632be2bd5a48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bmj6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.291448 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d4beba1-dc60-4190-925e-bd0c0d6deee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cadf761b9301aaeea19fad51cfac7b4aa80f49ae5e0fadf4eababc2c5bb945b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08932142792b5b7e1afc60e25e6fb6b092c9c65185a0e407f807d90b1928807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://826db300be47e8ade08ecd18880a53f4ce70b3b8f4ffbcd327fec2f952b0168d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5edae8f60ea42bc0f7cee0c415afdb634b13222f6a9b1bbac9e15d6b3ec3867\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.308182 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a01ee0c8b6fc250c9664c58a947304c136dc30f9590adca34a147b5b36b26822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.321902 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c695w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5994c1d0-57bd-4f0d-a63f-6e0f54746c3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e361eebec4ca798ac60c82c76e502f57ba96d6369573a6cacb242cecee5c1bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4pbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c695w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.338229 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rpzjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7662a84d-d9cb-4684-b76f-c63ffeff8344\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:05Z\\\",\\\"message\\\":\\\"2026-01-28T15:19:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9\\\\n2026-01-28T15:19:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cdb26539-851e-4d1d-a39e-83f7077799c9 to /host/opt/cni/bin/\\\\n2026-01-28T15:19:20Z [verbose] multus-daemon started\\\\n2026-01-28T15:19:20Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:20:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l84dh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rpzjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.360795 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748c84b-daec-4bf0-bda9-180d379ab075\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:20:21Z\\\",\\\"message\\\":\\\"73 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:21Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:20:21.222738 6673 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-diagnostics/network-check-target_TCP_cluster\\\\\\\", UUID:\\\\\\\"7594bb65-e742-44b3-a975-d639b1128be5\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"cluster\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:20:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:19:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68qp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kwnzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.388001 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7112154f-4499-48ec-9135-6f4a26eca33a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://544d8a210aa75da4cd5f655fdcf3e963b31308a372eb5ed0b0d8ebca82b6182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c611f847965dddd0cf0e242ffdd2201ff346baef8d8a997c7077f4f50188d6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2mq85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-b6g2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.407307 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b98e9cadd50ebc51646f9dfac349e4e1c3540bbc794b9a0c712d1d3babb061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3136515928cf96ec477e6d5fe1b367ffe77ca16cc2cea23c87b4f1fb3bc8aa16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.421794 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d876dfb2-6c3f-4e7d-8850-c7e97b36058b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901b41e0eae305224cf5ba0f81dd16a3269d312e3155b29a018bc280abbfba6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://723d0e0fee1b4e71b70f58531d510ef5c3d2cad0262a0e4f9218ec03d9a0d4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0f1e1462d8249db7fbf4cca7ee46cdce8299eef9e99b7c6c008164e3ad0d9bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1a12382b24c97189200caac00ac4720d4cb419cef33698debe6355e9f28c9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.439607 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.452264 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55xm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54aabb4-c2b7-4000-927d-c71f81572645\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9b66b34712673f0ea1b2d146fbaa54902433e2edfdf7e28766319303fd4524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5l8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55xm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.470193 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06d899c2-5ac5-4760-b71a-06c970fdc9fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e83b8e8bf35c2af59c4df410f1190732d8d56c8c766411f212044138817258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tb2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:19:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8llkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.489047 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:33 crc kubenswrapper[4656]: I0128 15:20:33.506849 4656 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce9a6c7-62ad-4d0e-955e-dcb43dac9226\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:19:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:18:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:19:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:19:01.434180 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:19:01.434698 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:19:01.436658 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3772564301/tls.crt::/tmp/serving-cert-3772564301/tls.key\\\\\\\"\\\\nI0128 15:19:02.051801 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:19:02.056263 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:19:02.056305 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:19:02.056363 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:19:02.056372 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:19:02.073545 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:19:02.073593 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073600 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:19:02.073605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:19:02.073609 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:19:02.073612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:19:02.073616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:19:02.074182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:19:02.077241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:18:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:18:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:18:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:18:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:20:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:20:34 crc kubenswrapper[4656]: I0128 15:20:34.125809 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:19:40.466056946 +0000 UTC Jan 28 15:20:34 crc kubenswrapper[4656]: I0128 15:20:34.170513 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:34 crc kubenswrapper[4656]: I0128 15:20:34.170640 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:34 crc kubenswrapper[4656]: I0128 15:20:34.170689 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:34 crc kubenswrapper[4656]: E0128 15:20:34.171786 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:34 crc kubenswrapper[4656]: E0128 15:20:34.171969 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:34 crc kubenswrapper[4656]: E0128 15:20:34.172195 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:35 crc kubenswrapper[4656]: I0128 15:20:35.126911 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:34:44.061147139 +0000 UTC Jan 28 15:20:35 crc kubenswrapper[4656]: I0128 15:20:35.169676 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:35 crc kubenswrapper[4656]: E0128 15:20:35.170061 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:36 crc kubenswrapper[4656]: I0128 15:20:36.127799 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 13:48:34.638280197 +0000 UTC Jan 28 15:20:36 crc kubenswrapper[4656]: I0128 15:20:36.173236 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:36 crc kubenswrapper[4656]: E0128 15:20:36.173558 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:36 crc kubenswrapper[4656]: I0128 15:20:36.174250 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:36 crc kubenswrapper[4656]: E0128 15:20:36.174316 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:36 crc kubenswrapper[4656]: I0128 15:20:36.174451 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:36 crc kubenswrapper[4656]: E0128 15:20:36.174514 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:37 crc kubenswrapper[4656]: E0128 15:20:37.047362 4656 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:20:37 crc kubenswrapper[4656]: I0128 15:20:37.128124 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:11:58.117489854 +0000 UTC Jan 28 15:20:37 crc kubenswrapper[4656]: I0128 15:20:37.170659 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:37 crc kubenswrapper[4656]: E0128 15:20:37.170968 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:38 crc kubenswrapper[4656]: I0128 15:20:38.129089 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 06:02:43.085227007 +0000 UTC Jan 28 15:20:38 crc kubenswrapper[4656]: I0128 15:20:38.170450 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:38 crc kubenswrapper[4656]: I0128 15:20:38.170516 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:38 crc kubenswrapper[4656]: I0128 15:20:38.170453 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:38 crc kubenswrapper[4656]: E0128 15:20:38.170668 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:38 crc kubenswrapper[4656]: E0128 15:20:38.170812 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:38 crc kubenswrapper[4656]: E0128 15:20:38.170964 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:39 crc kubenswrapper[4656]: I0128 15:20:39.129501 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:48:11.645608866 +0000 UTC Jan 28 15:20:39 crc kubenswrapper[4656]: I0128 15:20:39.170811 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:39 crc kubenswrapper[4656]: E0128 15:20:39.171385 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:40 crc kubenswrapper[4656]: I0128 15:20:40.130431 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:30:17.616396151 +0000 UTC Jan 28 15:20:40 crc kubenswrapper[4656]: I0128 15:20:40.170550 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:40 crc kubenswrapper[4656]: E0128 15:20:40.170680 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:40 crc kubenswrapper[4656]: I0128 15:20:40.170569 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:40 crc kubenswrapper[4656]: E0128 15:20:40.170775 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:40 crc kubenswrapper[4656]: I0128 15:20:40.170550 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:40 crc kubenswrapper[4656]: E0128 15:20:40.170951 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.131195 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:16:06.657941844 +0000 UTC Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.169865 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:41 crc kubenswrapper[4656]: E0128 15:20:41.169989 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
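The kubernetes.io/kubelet-serving lines above report the same certificate (expiring 2026-02-24 05:53:03) with a different rotation deadline on every sync. As a hedged reading: client-go's certificate manager re-draws the deadline each pass as a jittered point late in the certificate's lifetime (roughly the 70-90% band, as I understand the upstream heuristic), which is why the logged deadline jumps between November 2025 and January 2026 from one second to the next. A minimal Go sketch of that draw, with an assumed issue time since the log only shows the expiration:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextRotationDeadline is a hypothetical re-implementation of the
    // jittered deadline draw: a random point in the final 70-90% band of
    // the certificate's lifetime. Not the kubelet's actual code.
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        return notBefore.Add(time.Duration(float64(total) * (0.7 + 0.2*rand.Float64())))
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
        notBefore := notAfter.AddDate(-1, 0, 0) // assumption: one-year lifetime
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline is", nextRotationDeadline(notBefore, notAfter))
        }
    }

Under the one-year assumption the 70-90% band runs from early November 2025 to mid-January 2026, which matches the spread of deadlines logged here; once a drawn deadline already lies in the past, the manager rotates, as happens at 15:20:43 below.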
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.212967 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=13.212945189 podStartE2EDuration="13.212945189s" podCreationTimestamp="2026-01-28 15:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.212646561 +0000 UTC m=+131.720817365" watchObservedRunningTime="2026-01-28 15:20:41.212945189 +0000 UTC m=+131.721115993" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.268494 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.269597 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:20:41 crc kubenswrapper[4656]: E0128 15:20:41.269790 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.320092 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-c695w" podStartSLOduration=96.32007015 podStartE2EDuration="1m36.32007015s" podCreationTimestamp="2026-01-28 15:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.318912387 +0000 UTC m=+131.827083181" watchObservedRunningTime="2026-01-28 15:20:41.32007015 +0000 UTC m=+131.828240974" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.341787 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rpzjg" podStartSLOduration=95.341766202 podStartE2EDuration="1m35.341766202s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.341518255 +0000 UTC m=+131.849689059" watchObservedRunningTime="2026-01-28 15:20:41.341766202 +0000 UTC m=+131.849937016" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.482193 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-b6g2q" podStartSLOduration=94.482138741 podStartE2EDuration="1m34.482138741s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.458145642 +0000 UTC m=+131.966316446" watchObservedRunningTime="2026-01-28 15:20:41.482138741 +0000 UTC m=+131.990309545" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.482487 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.482481771 podStartE2EDuration="1m21.482481771s" podCreationTimestamp="2026-01-28 15:19:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.480731151 +0000 UTC m=+131.988901965" watchObservedRunningTime="2026-01-28 15:20:41.482481771 +0000 UTC m=+131.990652575" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.500104 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=73.500082267 podStartE2EDuration="1m13.500082267s" podCreationTimestamp="2026-01-28 15:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.499124299 +0000 UTC m=+132.007295133" watchObservedRunningTime="2026-01-28 15:20:41.500082267 +0000 UTC m=+132.008253071" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.541917 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-55xm4" podStartSLOduration=96.541892319 podStartE2EDuration="1m36.541892319s" podCreationTimestamp="2026-01-28 15:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.527821594 +0000 UTC m=+132.035992398" watchObservedRunningTime="2026-01-28 15:20:41.541892319 +0000 UTC m=+132.050063123" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.542423 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podStartSLOduration=95.542416554 podStartE2EDuration="1m35.542416554s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.541334793 +0000 UTC m=+132.049505607" watchObservedRunningTime="2026-01-28 15:20:41.542416554 +0000 UTC m=+132.050587368" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.587774 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=30.587748867 podStartE2EDuration="30.587748867s" podCreationTimestamp="2026-01-28 15:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.586304925 +0000 UTC m=+132.094475729" watchObservedRunningTime="2026-01-28 15:20:41.587748867 +0000 UTC m=+132.095919671" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.628994 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=50.628970511 podStartE2EDuration="50.628970511s" podCreationTimestamp="2026-01-28 15:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.605714703 +0000 UTC m=+132.113885507" watchObservedRunningTime="2026-01-28 15:20:41.628970511 +0000 UTC m=+132.137141315" Jan 28 15:20:41 crc kubenswrapper[4656]: I0128 15:20:41.645932 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-854tp" podStartSLOduration=95.645905048 podStartE2EDuration="1m35.645905048s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:41.629429715 +0000 UTC m=+132.137600519" watchObservedRunningTime="2026-01-28 15:20:41.645905048 +0000 UTC m=+132.154075852" Jan 28 15:20:42 crc kubenswrapper[4656]: E0128 15:20:42.058675 4656 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:20:42 crc kubenswrapper[4656]: I0128 15:20:42.132041 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:19:51.925733387 +0000 UTC Jan 28 15:20:42 crc kubenswrapper[4656]: I0128 15:20:42.170756 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:42 crc kubenswrapper[4656]: I0128 15:20:42.170900 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:42 crc kubenswrapper[4656]: I0128 15:20:42.170902 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:42 crc kubenswrapper[4656]: E0128 15:20:42.171040 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:42 crc kubenswrapper[4656]: E0128 15:20:42.171119 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:42 crc kubenswrapper[4656]: E0128 15:20:42.171267 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.010311 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.011801 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.011914 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.012008 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.012190 4656 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:20:43Z","lastTransitionTime":"2026-01-28T15:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.080965 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz"] Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.081955 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.088981 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.089186 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.088982 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.088988 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.131670 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.131977 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.132100 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.132209 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.132315 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.132355 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:56:34.764066791 +0000 UTC Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.132407 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.142643 4656 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.170306 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:43 crc kubenswrapper[4656]: E0128 15:20:43.170746 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.233756 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.234090 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.234318 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.233997 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.234442 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.234493 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.235041 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.235235 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc 
kubenswrapper[4656]: I0128 15:20:43.246371 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.266245 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1b0f1cb-b153-4b92-9ec6-30200cdab7d3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-25drz\" (UID: \"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.401822 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" Jan 28 15:20:43 crc kubenswrapper[4656]: I0128 15:20:43.501859 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" event={"ID":"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3","Type":"ContainerStarted","Data":"505c9039575e525ff6948f41b5e315fdcf04c83ae9620408f603397bf19d9ef6"} Jan 28 15:20:44 crc kubenswrapper[4656]: I0128 15:20:44.170384 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:44 crc kubenswrapper[4656]: I0128 15:20:44.170423 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:44 crc kubenswrapper[4656]: I0128 15:20:44.170513 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:44 crc kubenswrapper[4656]: E0128 15:20:44.171424 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:44 crc kubenswrapper[4656]: E0128 15:20:44.171508 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:44 crc kubenswrapper[4656]: E0128 15:20:44.171591 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:44 crc kubenswrapper[4656]: I0128 15:20:44.511877 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" event={"ID":"d1b0f1cb-b153-4b92-9ec6-30200cdab7d3","Type":"ContainerStarted","Data":"318c5b9d2f9f708ec416642b521a622934f3ca6935742ea695ff42fcd5a62316"} Jan 28 15:20:45 crc kubenswrapper[4656]: I0128 15:20:45.169808 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:45 crc kubenswrapper[4656]: E0128 15:20:45.169946 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:46 crc kubenswrapper[4656]: I0128 15:20:46.169879 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:46 crc kubenswrapper[4656]: I0128 15:20:46.169981 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:46 crc kubenswrapper[4656]: I0128 15:20:46.170105 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:46 crc kubenswrapper[4656]: E0128 15:20:46.170367 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:46 crc kubenswrapper[4656]: E0128 15:20:46.170444 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:46 crc kubenswrapper[4656]: E0128 15:20:46.170279 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:47 crc kubenswrapper[4656]: E0128 15:20:47.059917 4656 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 28 15:20:47 crc kubenswrapper[4656]: I0128 15:20:47.170619 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:20:47 crc kubenswrapper[4656]: E0128 15:20:47.170825 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:20:48 crc kubenswrapper[4656]: I0128 15:20:48.170309 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:20:48 crc kubenswrapper[4656]: E0128 15:20:48.170707 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:20:48 crc kubenswrapper[4656]: I0128 15:20:48.170394 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:20:48 crc kubenswrapper[4656]: I0128 15:20:48.170327 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:20:48 crc kubenswrapper[4656]: E0128 15:20:48.171287 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:20:48 crc kubenswrapper[4656]: E0128 15:20:48.171440 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:20:49 crc kubenswrapper[4656]: I0128 15:20:49.169902 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:20:49 crc kubenswrapper[4656]: E0128 15:20:49.170456 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:20:50 crc kubenswrapper[4656]: I0128 15:20:50.169593 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:20:50 crc kubenswrapper[4656]: I0128 15:20:50.169655 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:20:50 crc kubenswrapper[4656]: E0128 15:20:50.169767 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:20:50 crc kubenswrapper[4656]: E0128 15:20:50.169849 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:20:50 crc kubenswrapper[4656]: I0128 15:20:50.169621 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:20:50 crc kubenswrapper[4656]: E0128 15:20:50.170341 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:20:51 crc kubenswrapper[4656]: I0128 15:20:51.171400 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:20:51 crc kubenswrapper[4656]: E0128 15:20:51.173350 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:20:52 crc kubenswrapper[4656]: E0128 15:20:52.061591 4656 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.169818 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.169861 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.169929 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:20:52 crc kubenswrapper[4656]: E0128 15:20:52.170030 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:20:52 crc kubenswrapper[4656]: E0128 15:20:52.170370 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:20:52 crc kubenswrapper[4656]: E0128 15:20:52.170497 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.539813 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/1.log"
Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.540726 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/0.log"
Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.540853 4656 generic.go:334] "Generic (PLEG): container finished" podID="7662a84d-d9cb-4684-b76f-c63ffeff8344" containerID="c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638" exitCode=1
Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.540927 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rpzjg" event={"ID":"7662a84d-d9cb-4684-b76f-c63ffeff8344","Type":"ContainerDied","Data":"c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638"}
Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.540999 4656 scope.go:117] "RemoveContainer" containerID="469a73da55b1f73c720dde942f37fe36a83d27c5243f1907911e9f7e12474434"
Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.541484 4656 scope.go:117] "RemoveContainer" containerID="c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638"
Jan 28 15:20:52 crc kubenswrapper[4656]: E0128 15:20:52.541718 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-rpzjg_openshift-multus(7662a84d-d9cb-4684-b76f-c63ffeff8344)\"" pod="openshift-multus/multus-rpzjg" podUID="7662a84d-d9cb-4684-b76f-c63ffeff8344"
Jan 28 15:20:52 crc kubenswrapper[4656]: I0128 15:20:52.563373 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-25drz" podStartSLOduration=106.563351109 podStartE2EDuration="1m46.563351109s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:44.531790789 +0000 UTC m=+135.039961593" watchObservedRunningTime="2026-01-28 15:20:52.563351109 +0000 UTC m=+143.071521913"
Jan 28 15:20:53 crc kubenswrapper[4656]: I0128 15:20:53.170458 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:20:53 crc kubenswrapper[4656]: E0128 15:20:53.170892 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:20:53 crc kubenswrapper[4656]: I0128 15:20:53.171212 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"
Jan 28 15:20:53 crc kubenswrapper[4656]: E0128 15:20:53.171390 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kwnzt_openshift-ovn-kubernetes(5748c84b-daec-4bf0-bda9-180d379ab075)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075"
Jan 28 15:20:53 crc kubenswrapper[4656]: I0128 15:20:53.545799 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/1.log"
Jan 28 15:20:54 crc kubenswrapper[4656]: I0128 15:20:54.170374 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:20:54 crc kubenswrapper[4656]: I0128 15:20:54.170406 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:20:54 crc kubenswrapper[4656]: E0128 15:20:54.171070 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:20:54 crc kubenswrapper[4656]: E0128 15:20:54.171249 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:20:54 crc kubenswrapper[4656]: I0128 15:20:54.170406 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:20:54 crc kubenswrapper[4656]: E0128 15:20:54.171570 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:20:55 crc kubenswrapper[4656]: I0128 15:20:55.170067 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r"
Jan 28 15:20:55 crc kubenswrapper[4656]: E0128 15:20:55.170896 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48"
Jan 28 15:20:56 crc kubenswrapper[4656]: I0128 15:20:56.169851 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:20:56 crc kubenswrapper[4656]: I0128 15:20:56.169927 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:20:56 crc kubenswrapper[4656]: I0128 15:20:56.169891 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:20:56 crc kubenswrapper[4656]: E0128 15:20:56.170023 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:20:56 crc kubenswrapper[4656]: E0128 15:20:56.170116 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
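The kube-multus entries above show where the exitCode=1 comes from: after the PLEG "container finished" event, the kubelet parses the container's log files under /var/log/pods, which are named by restart count (0.log for the first run, 1.log after one restart). A sketch of that path layout, using the values visible in the entries above:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // containerLogPath rebuilds the layout visible in the log:
    // /var/log/pods/<namespace>_<pod>_<podUID>/<container>/<restartCount>.log
    // Illustrative helper; the kubelet constructs these paths internally.
    func containerLogPath(ns, pod, podUID, container string, restart int) string {
        return filepath.Join("/var/log/pods",
            fmt.Sprintf("%s_%s_%s", ns, pod, podUID),
            container,
            fmt.Sprintf("%d.log", restart))
    }

    func main() {
        // Values taken from the kube-multus entries above.
        fmt.Println(containerLogPath("openshift-multus", "multus-rpzjg",
            "7662a84d-d9cb-4684-b76f-c63ffeff8344", "kube-multus", 1))
    }

The kube-multus restart at 15:21:07 below (RemoveContainer of the dead container, then ContainerStarted) appears to be the "back-off 10s" delay expiring on a subsequent sync.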
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:57 crc kubenswrapper[4656]: E0128 15:20:57.063517 4656 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:20:57 crc kubenswrapper[4656]: I0128 15:20:57.170629 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:57 crc kubenswrapper[4656]: E0128 15:20:57.170824 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:20:58 crc kubenswrapper[4656]: I0128 15:20:58.170470 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:20:58 crc kubenswrapper[4656]: I0128 15:20:58.170485 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:20:58 crc kubenswrapper[4656]: I0128 15:20:58.170485 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:20:58 crc kubenswrapper[4656]: E0128 15:20:58.170786 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:20:58 crc kubenswrapper[4656]: E0128 15:20:58.170638 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:20:58 crc kubenswrapper[4656]: E0128 15:20:58.170883 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:20:59 crc kubenswrapper[4656]: I0128 15:20:59.170622 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:20:59 crc kubenswrapper[4656]: E0128 15:20:59.170945 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:21:00 crc kubenswrapper[4656]: I0128 15:21:00.169960 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:21:00 crc kubenswrapper[4656]: I0128 15:21:00.169986 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:21:00 crc kubenswrapper[4656]: I0128 15:21:00.170017 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:21:00 crc kubenswrapper[4656]: E0128 15:21:00.170107 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:21:00 crc kubenswrapper[4656]: E0128 15:21:00.170214 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:21:00 crc kubenswrapper[4656]: E0128 15:21:00.170340 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:21:01 crc kubenswrapper[4656]: I0128 15:21:01.191706 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:21:01 crc kubenswrapper[4656]: I0128 15:21:01.191770 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:21:01 crc kubenswrapper[4656]: E0128 15:21:01.196505 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:21:01 crc kubenswrapper[4656]: E0128 15:21:01.196670 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:21:02 crc kubenswrapper[4656]: E0128 15:21:02.065951 4656 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:21:02 crc kubenswrapper[4656]: I0128 15:21:02.170221 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:21:02 crc kubenswrapper[4656]: I0128 15:21:02.170316 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:21:02 crc kubenswrapper[4656]: E0128 15:21:02.170395 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:21:02 crc kubenswrapper[4656]: E0128 15:21:02.170491 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:21:03 crc kubenswrapper[4656]: I0128 15:21:03.169833 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:21:03 crc kubenswrapper[4656]: E0128 15:21:03.170016 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:21:03 crc kubenswrapper[4656]: I0128 15:21:03.170366 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:21:03 crc kubenswrapper[4656]: E0128 15:21:03.170490 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:21:04 crc kubenswrapper[4656]: I0128 15:21:04.170505 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:21:04 crc kubenswrapper[4656]: E0128 15:21:04.170650 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:21:04 crc kubenswrapper[4656]: I0128 15:21:04.170722 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:21:04 crc kubenswrapper[4656]: E0128 15:21:04.170910 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:21:05 crc kubenswrapper[4656]: I0128 15:21:05.170506 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:21:05 crc kubenswrapper[4656]: E0128 15:21:05.170864 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:21:05 crc kubenswrapper[4656]: I0128 15:21:05.171577 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:21:05 crc kubenswrapper[4656]: E0128 15:21:05.171768 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:21:06 crc kubenswrapper[4656]: I0128 15:21:06.169619 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:21:06 crc kubenswrapper[4656]: I0128 15:21:06.169670 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:21:06 crc kubenswrapper[4656]: E0128 15:21:06.169786 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:21:06 crc kubenswrapper[4656]: E0128 15:21:06.169866 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:21:07 crc kubenswrapper[4656]: E0128 15:21:07.067672 4656 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:21:07 crc kubenswrapper[4656]: I0128 15:21:07.169724 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:21:07 crc kubenswrapper[4656]: I0128 15:21:07.169834 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:21:07 crc kubenswrapper[4656]: E0128 15:21:07.169892 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:21:07 crc kubenswrapper[4656]: E0128 15:21:07.170086 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:21:07 crc kubenswrapper[4656]: I0128 15:21:07.170644 4656 scope.go:117] "RemoveContainer" containerID="c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638" Jan 28 15:21:07 crc kubenswrapper[4656]: I0128 15:21:07.596072 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/1.log" Jan 28 15:21:07 crc kubenswrapper[4656]: I0128 15:21:07.596486 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rpzjg" event={"ID":"7662a84d-d9cb-4684-b76f-c63ffeff8344","Type":"ContainerStarted","Data":"34fa797442b557de0e9ffab2d826f22ba8d92221e464edd57e5778604260c2bd"} Jan 28 15:21:08 crc kubenswrapper[4656]: I0128 15:21:08.170406 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:21:08 crc kubenswrapper[4656]: I0128 15:21:08.170424 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:21:08 crc kubenswrapper[4656]: E0128 15:21:08.170879 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:21:08 crc kubenswrapper[4656]: E0128 15:21:08.171031 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:21:08 crc kubenswrapper[4656]: I0128 15:21:08.171195 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:21:08 crc kubenswrapper[4656]: I0128 15:21:08.602128 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/3.log" Jan 28 15:21:08 crc kubenswrapper[4656]: I0128 15:21:08.604448 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerStarted","Data":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} Jan 28 15:21:08 crc kubenswrapper[4656]: I0128 15:21:08.605594 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:21:08 crc kubenswrapper[4656]: I0128 15:21:08.671940 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podStartSLOduration=122.671915666 podStartE2EDuration="2m2.671915666s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:08.667317974 +0000 UTC m=+159.175488778" watchObservedRunningTime="2026-01-28 15:21:08.671915666 +0000 UTC m=+159.180086480" Jan 28 15:21:09 crc kubenswrapper[4656]: I0128 15:21:09.048819 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bmj6r"] Jan 28 15:21:09 crc kubenswrapper[4656]: I0128 15:21:09.049063 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:21:09 crc kubenswrapper[4656]: E0128 15:21:09.049277 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:21:09 crc kubenswrapper[4656]: I0128 15:21:09.173562 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:21:09 crc kubenswrapper[4656]: E0128 15:21:09.173694 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:21:10 crc kubenswrapper[4656]: I0128 15:21:10.169594 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:21:10 crc kubenswrapper[4656]: E0128 15:21:10.170032 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:21:10 crc kubenswrapper[4656]: I0128 15:21:10.170421 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:21:10 crc kubenswrapper[4656]: E0128 15:21:10.170552 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:21:11 crc kubenswrapper[4656]: I0128 15:21:11.170424 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:21:11 crc kubenswrapper[4656]: I0128 15:21:11.170455 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:21:11 crc kubenswrapper[4656]: E0128 15:21:11.172848 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:21:11 crc kubenswrapper[4656]: E0128 15:21:11.173086 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bmj6r" podUID="11320542-8463-40db-8981-632be2bd5a48" Jan 28 15:21:12 crc kubenswrapper[4656]: I0128 15:21:12.169968 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:21:12 crc kubenswrapper[4656]: I0128 15:21:12.170406 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:21:12 crc kubenswrapper[4656]: I0128 15:21:12.172803 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 15:21:12 crc kubenswrapper[4656]: I0128 15:21:12.173117 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 15:21:12 crc kubenswrapper[4656]: I0128 15:21:12.173399 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 15:21:12 crc kubenswrapper[4656]: I0128 15:21:12.173633 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.170432 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.170845 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.173090 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.173264 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.444345 4656 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.499781 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.500521 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.502585 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.502986 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.503868 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.504220 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.504729 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j8tlz"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.505487 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.505513 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jpgn"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.505777 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.506138 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.506346 4656 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.506411 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.506603 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.506911 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gcdpp"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.507471 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.508372 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.508717 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.509221 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.509621 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.510354 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.510795 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.517676 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l99lt"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.518124 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-zrrnn"] Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.520734 4656 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data": failed to list *v1.Secret: secrets "v4-0-config-user-idp-0-file-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.520781 4656 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-login": failed to list *v1.Secret: secrets "v4-0-config-user-template-login" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.520809 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-idp-0-file-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.520819 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-login\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.520959 4656 reflector.go:561] object-"openshift-authentication"/"audit": failed to list *v1.ConfigMap: configmaps "audit" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.520982 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"audit\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 
15:21:13.521019 4656 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-serving-cert": failed to list *v1.Secret: secrets "v4-0-config-system-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.521040 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.521191 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.527339 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8l46h"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.527591 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zrrnn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.527742 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hvbtc"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.528143 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rmnzt"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.528202 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.528423 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.528618 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.535143 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.535885 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.537589 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f905c9bf-de63-4bbb-842b-c5cfb76fec46-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.537621 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.537643 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.537658 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74b5802b-b8fb-48d1-8723-2c78386825db-audit-dir\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.537675 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-audit-policies\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.537715 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mzpq\" (UniqueName: \"kubernetes.io/projected/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-kube-api-access-5mzpq\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.537742 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-etcd-client\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.537878 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538215 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-encryption-config\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538424 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f905c9bf-de63-4bbb-842b-c5cfb76fec46-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538452 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-serving-cert\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538474 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb55d\" (UniqueName: \"kubernetes.io/projected/f905c9bf-de63-4bbb-842b-c5cfb76fec46-kube-api-access-sb55d\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538489 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt568\" (UniqueName: \"kubernetes.io/projected/f0d9e967-a840-414e-9ab7-00affd50fec5-kube-api-access-mt568\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538508 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538560 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-audit-dir\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538579 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: 
\"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538626 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0d9e967-a840-414e-9ab7-00affd50fec5-serving-cert\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538656 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0d9e967-a840-414e-9ab7-00affd50fec5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538686 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7gtm\" (UniqueName: \"kubernetes.io/projected/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-kube-api-access-w7gtm\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538707 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538724 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538738 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wzmf\" (UniqueName: \"kubernetes.io/projected/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-kube-api-access-6wzmf\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538752 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.538774 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/f905c9bf-de63-4bbb-842b-c5cfb76fec46-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.552216 4656 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.552271 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.552344 4656 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.552361 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.552413 4656 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.552431 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.552488 4656 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.552504 4656 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.552554 4656 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.552570 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.562435 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.564230 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-jrkdc"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.564930 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672121 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njdjz\" (UniqueName: \"kubernetes.io/projected/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-kube-api-access-njdjz\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672174 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-encryption-config\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672222 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672251 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f905c9bf-de63-4bbb-842b-c5cfb76fec46-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672303 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57598ddc-f214-47b1-bdef-10bdf94607d1-audit-dir\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672321 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672336 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672355 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-config\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672377 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-serving-cert\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672394 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt568\" (UniqueName: \"kubernetes.io/projected/f0d9e967-a840-414e-9ab7-00affd50fec5-kube-api-access-mt568\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672422 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672438 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-images\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc 
kubenswrapper[4656]: I0128 15:21:13.672458 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672476 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-service-ca-bundle\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672497 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb55d\" (UniqueName: \"kubernetes.io/projected/f905c9bf-de63-4bbb-842b-c5cfb76fec46-kube-api-access-sb55d\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672525 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-audit-dir\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672594 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672621 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0d9e967-a840-414e-9ab7-00affd50fec5-serving-cert\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672636 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-image-import-ca\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672655 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672675 
4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672693 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2fd1877-7bc3-4808-8a45-716da7b829e5-serving-cert\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672712 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4-metrics-tls\") pod \"dns-operator-744455d44c-hvbtc\" (UID: \"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672732 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-encryption-config\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672750 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjmtn\" (UniqueName: \"kubernetes.io/projected/57598ddc-f214-47b1-bdef-10bdf94607d1-kube-api-access-mjmtn\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672770 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0d9e967-a840-414e-9ab7-00affd50fec5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672795 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672818 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkqj9\" (UniqueName: \"kubernetes.io/projected/e2fd1877-7bc3-4808-8a45-716da7b829e5-kube-api-access-xkqj9\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672834 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672852 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4d96cdec-34f1-44e2-9380-40475a720b31-machine-approver-tls\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672873 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7gtm\" (UniqueName: \"kubernetes.io/projected/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-kube-api-access-w7gtm\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672893 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672913 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlqkc\" (UniqueName: \"kubernetes.io/projected/6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4-kube-api-access-xlqkc\") pod \"dns-operator-744455d44c-hvbtc\" (UID: \"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672929 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-config\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672947 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.672976 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673004 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-serving-cert\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673027 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wzmf\" (UniqueName: \"kubernetes.io/projected/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-kube-api-access-6wzmf\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673044 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673064 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-config\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673109 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673134 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s67ff\" (UniqueName: \"kubernetes.io/projected/97f85e75-6682-490f-9f1d-cdf924a67f38-kube-api-access-s67ff\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673162 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-config\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673182 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f85e75-6682-490f-9f1d-cdf924a67f38-serving-cert\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673228 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f905c9bf-de63-4bbb-842b-c5cfb76fec46-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673257 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cp7b\" (UniqueName: \"kubernetes.io/projected/d903ef3d-1544-4343-b254-15939a05fec0-kube-api-access-2cp7b\") pod \"downloads-7954f5f757-zrrnn\" (UID: \"d903ef3d-1544-4343-b254-15939a05fec0\") " pod="openshift-console/downloads-7954f5f757-zrrnn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673279 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7qd6\" (UniqueName: \"kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673300 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlrc9\" (UniqueName: \"kubernetes.io/projected/679bac75-fbd8-4a24-ad40-9c5d10860c90-kube-api-access-jlrc9\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673322 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673373 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krlkb\" (UniqueName: \"kubernetes.io/projected/4d96cdec-34f1-44e2-9380-40475a720b31-kube-api-access-krlkb\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673397 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f905c9bf-de63-4bbb-842b-c5cfb76fec46-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673432 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673453 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673479 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-config\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673497 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-etcd-serving-ca\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673518 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-audit\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673545 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-etcd-client\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673564 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673585 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/679bac75-fbd8-4a24-ad40-9c5d10860c90-serving-cert\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673604 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2fd1877-7bc3-4808-8a45-716da7b829e5-trusted-ca\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673637 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"
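The reconciler entries above come in matched pairs from the kubelet volume manager: reconciler_common.go:245 logs VerifyControllerAttachedVolume, which checks the record that the attach/detach controller has marked a volume attached to this node, and reconciler_common.go:218 logs MountVolume, which queues the actual mount. A minimal sketch of that desired-state/actual-state loop, using hypothetical types rather than the real kubelet structs:

```go
// Minimal sketch of the desired-state/actual-state reconcile pattern the
// kubelet volume manager follows (hypothetical types; the real code lives
// under pkg/kubelet/volumemanager/reconciler).
package main

import "fmt"

type volume struct {
	name, podUID string
}

type state map[string]volume // keyed by UniqueName

func reconcile(desired, actual state) {
	for key, v := range desired {
		if _, mounted := actual[key]; !mounted {
			// Phase 1: confirm the attach/detach controller reported the
			// volume attached ("VerifyControllerAttachedVolume started").
			fmt.Printf("VerifyControllerAttachedVolume started for %q\n", key)
			// Phase 2: queue the mount ("MountVolume started"); on success
			// the operation generator logs "MountVolume.SetUp succeeded".
			fmt.Printf("MountVolume started for %q (pod %s)\n", v.name, v.podUID)
			actual[key] = v
		}
	}
}

func main() {
	desired := state{"kubernetes.io/secret/uid-serving-cert": {"serving-cert", "uid"}}
	reconcile(desired, state{})
}
```

On success the operation generator emits the MountVolume.SetUp succeeded lines that appear further down in this log.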
\"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673664 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673735 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74b5802b-b8fb-48d1-8723-2c78386825db-audit-dir\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673758 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-client-ca\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673783 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-audit-policies\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673808 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-auth-proxy-config\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673832 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/57598ddc-f214-47b1-bdef-10bdf94607d1-node-pullsecrets\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673856 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mzpq\" (UniqueName: \"kubernetes.io/projected/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-kube-api-access-5mzpq\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673879 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-etcd-client\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 
crc kubenswrapper[4656]: I0128 15:21:13.673931 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673958 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.673993 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.674014 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmz5l\" (UniqueName: \"kubernetes.io/projected/74b5802b-b8fb-48d1-8723-2c78386825db-kube-api-access-lmz5l\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.674043 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fd1877-7bc3-4808-8a45-716da7b829e5-config\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.676251 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-audit-dir\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.676821 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74b5802b-b8fb-48d1-8723-2c78386825db-audit-dir\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.677120 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0d9e967-a840-414e-9ab7-00affd50fec5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.686867 4656 reflector.go:561] 
object-"openshift-authentication"/"v4-0-config-user-template-error": failed to list *v1.Secret: secrets "v4-0-config-user-template-error" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.686932 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-error\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.687214 4656 reflector.go:561] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": failed to list *v1.Secret: secrets "oauth-openshift-dockercfg-znhcc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.687235 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-znhcc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"oauth-openshift-dockercfg-znhcc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.689579 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.691524 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.695562 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.695961 4656 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.695998 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.696094 4656 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: secrets "samples-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the 
namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.696112 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"samples-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.696533 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c8r6q"] Jan 28 15:21:13 crc kubenswrapper[4656]: W0128 15:21:13.696955 4656 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": failed to list *v1.Secret: secrets "v4-0-config-user-template-provider-selection" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 28 15:21:13 crc kubenswrapper[4656]: E0128 15:21:13.696979 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-provider-selection\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.697269 4656 util.go:30] "No sandbox for pod can be found. 
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.699329 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.699535 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.699920 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.700477 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.700926 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.701153 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.701347 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.701490 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.701653 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.711120 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.726414 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.726909 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.727837 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.727963 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.728082 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.728278 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.728409 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.727915 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.728306 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.728568 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.729104 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.729474 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.729612 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.729819 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.729932 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.730067 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.730172 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.730483 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.730599 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.730687 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.730799 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.730967 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.731145 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.731458 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.731560 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.731632 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.731696 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.731761 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.731833 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.731909 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.732011 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.732092 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.732263 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.732360 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.732468 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.732467 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.732991 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.733146 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.733296 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.733443 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.733556 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.733740 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.733849 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.733846 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.734177 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
(UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.734259 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.734404 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.734808 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.734859 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.735382 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.735548 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.735885 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.735976 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.736228 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.736323 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-audit-policies\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.735897 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.737273 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.737766 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.737958 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.738102 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.739492 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.740891 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.740939 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.741907 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.743249 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.744704 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.745117 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.747438 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.747674 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.747994 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0d9e967-a840-414e-9ab7-00affd50fec5-serving-cert\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.748256 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5r48x"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.748284 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.748665 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.748806 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.748820 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.748954 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.748973 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.749287 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.753277 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.755727 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.758380 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.766725 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f905c9bf-de63-4bbb-842b-c5cfb76fec46-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.768300 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-etcd-client\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.768586 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-encryption-config\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.768963 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.769891 4656 util.go:30] "No sandbox for pod can be found. 
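"SyncLoop ADD" marks a pod arriving from the API server, and "No sandbox for pod can be found. Need to start a new one" means the first sync for that pod must create its sandbox through the CRI runtime, CRI-O on this node. A hedged sketch of the underlying CRI call (the socket path and the pod UID here are placeholders, not values from this log):

```go
// Sketch of the CRI RunPodSandbox call the kubelet makes when it logs
// "No sandbox for pod can be found. Need to start a new one"
// (illustrative; socket path and UID are placeholders).
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "etcd-operator-b45778765-c8r6q",
				Namespace: "openshift-etcd-operator",
				Uid:       "00000000-0000-0000-0000-000000000000", // placeholder UID
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox:", resp.PodSandboxId)
}
```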
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.771507 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-serving-cert\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.772213 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.772227 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f905c9bf-de63-4bbb-842b-c5cfb76fec46-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.774337 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.774869 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f905c9bf-de63-4bbb-842b-c5cfb76fec46-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.775145 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-image-import-ca\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797511 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-oauth-config\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797567 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797592 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797614 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2fd1877-7bc3-4808-8a45-716da7b829e5-serving-cert\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797630 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4-metrics-tls\") pod \"dns-operator-744455d44c-hvbtc\" (UID: \"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797754 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-encryption-config\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797777 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjmtn\" (UniqueName: \"kubernetes.io/projected/57598ddc-f214-47b1-bdef-10bdf94607d1-kube-api-access-mjmtn\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797803 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797822 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkqj9\" (UniqueName: \"kubernetes.io/projected/e2fd1877-7bc3-4808-8a45-716da7b829e5-kube-api-access-xkqj9\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797850 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.797874 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4d96cdec-34f1-44e2-9380-40475a720b31-machine-approver-tls\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798020 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798044 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlqkc\" (UniqueName: \"kubernetes.io/projected/6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4-kube-api-access-xlqkc\") pod \"dns-operator-744455d44c-hvbtc\" (UID: \"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798070 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-config\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798115 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-serving-cert\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798143 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-config\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798186 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798223 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s67ff\" (UniqueName: \"kubernetes.io/projected/97f85e75-6682-490f-9f1d-cdf924a67f38-kube-api-access-s67ff\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798246 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-config\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x"
Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798264 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f85e75-6682-490f-9f1d-cdf924a67f38-serving-cert\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt"
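For secret and configmap volumes like those above, MountVolume.SetUp does not call mount(2) at all: the volume plugin writes the object's keys as files under the pod's volume directory using an atomic symlink swap, which is why each SetUp line can succeed within milliseconds of the MountVolume line. A sketch of the directory layout involved (hypothetical helper; the layout under /var/lib/kubelet/pods is the kubelet's conventional one):

```go
// Sketch of where MountVolume.SetUp materializes a configmap or secret volume
// (hypothetical helper; podVolumeDir is not a real kubelet function).
package main

import (
	"fmt"
	"path/filepath"
)

// podVolumeDir returns the target directory for one pod volume, e.g.
// /var/lib/kubelet/pods/<podUID>/volumes/kubernetes.io~configmap/<volume>.
func podVolumeDir(podUID, pluginName, volumeName string) string {
	return filepath.Join("/var/lib/kubelet/pods", podUID, "volumes", pluginName, volumeName)
}

func main() {
	// UID and volume name taken from the apiserver-76f77b778f-j8tlz entries above.
	fmt.Println(podVolumeDir(
		"57598ddc-f214-47b1-bdef-10bdf94607d1",
		"kubernetes.io~configmap",
		"trusted-ca-bundle",
	))
}
```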
\"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798293 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cp7b\" (UniqueName: \"kubernetes.io/projected/d903ef3d-1544-4343-b254-15939a05fec0-kube-api-access-2cp7b\") pod \"downloads-7954f5f757-zrrnn\" (UID: \"d903ef3d-1544-4343-b254-15939a05fec0\") " pod="openshift-console/downloads-7954f5f757-zrrnn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798313 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7qd6\" (UniqueName: \"kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798332 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-oauth-serving-cert\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798353 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlrc9\" (UniqueName: \"kubernetes.io/projected/679bac75-fbd8-4a24-ad40-9c5d10860c90-kube-api-access-jlrc9\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798383 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798406 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krlkb\" (UniqueName: \"kubernetes.io/projected/4d96cdec-34f1-44e2-9380-40475a720b31-kube-api-access-krlkb\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798425 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798451 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") 
" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798475 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-config\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798498 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-etcd-serving-ca\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798543 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-audit\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798560 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-etcd-client\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798596 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798617 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/679bac75-fbd8-4a24-ad40-9c5d10860c90-serving-cert\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798634 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2fd1877-7bc3-4808-8a45-716da7b829e5-trusted-ca\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798654 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-client-ca\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798674 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-serving-cert\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798703 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-auth-proxy-config\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798724 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/57598ddc-f214-47b1-bdef-10bdf94607d1-node-pullsecrets\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798750 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrjsb\" (UniqueName: \"kubernetes.io/projected/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-kube-api-access-jrjsb\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798792 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-config\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.798918 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799006 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799032 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmz5l\" (UniqueName: \"kubernetes.io/projected/74b5802b-b8fb-48d1-8723-2c78386825db-kube-api-access-lmz5l\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799055 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fd1877-7bc3-4808-8a45-716da7b829e5-config\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " 
pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799076 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njdjz\" (UniqueName: \"kubernetes.io/projected/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-kube-api-access-njdjz\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799131 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-trusted-ca-bundle\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799188 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799213 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-service-ca\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799247 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57598ddc-f214-47b1-bdef-10bdf94607d1-audit-dir\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799273 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799296 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799321 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-config\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799363 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-images\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799389 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.799419 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-service-ca-bundle\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.776356 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-image-import-ca\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.776615 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.794269 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt568\" (UniqueName: \"kubernetes.io/projected/f0d9e967-a840-414e-9ab7-00affd50fec5-kube-api-access-mt568\") pod \"openshift-config-operator-7777fb866f-lfmv6\" (UID: \"f0d9e967-a840-414e-9ab7-00affd50fec5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.796343 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.794810 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.795049 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.801679 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fd1877-7bc3-4808-8a45-716da7b829e5-config\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.806030 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-service-ca-bundle\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 
15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.806787 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-config\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.807563 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mzpq\" (UniqueName: \"kubernetes.io/projected/a23fe8b6-b461-4abb-ad2a-2bdd501fad81-kube-api-access-5mzpq\") pod \"apiserver-7bbb656c7d-98brw\" (UID: \"a23fe8b6-b461-4abb-ad2a-2bdd501fad81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.808686 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.809487 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-config\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.835200 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4d96cdec-34f1-44e2-9380-40475a720b31-machine-approver-tls\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.835602 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.837328 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.837528 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.837817 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.838511 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-encryption-config\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.838695 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb55d\" (UniqueName: \"kubernetes.io/projected/f905c9bf-de63-4bbb-842b-c5cfb76fec46-kube-api-access-sb55d\") pod \"cluster-image-registry-operator-dc59b4c8b-chdx8\" (UID: \"f905c9bf-de63-4bbb-842b-c5cfb76fec46\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.838812 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.841668 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.843314 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.843859 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-etcd-serving-ca\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.844042 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-config\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.848699 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.849264 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.850402 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-client-ca\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.851150 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.851747 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-config\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.852185 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/679bac75-fbd8-4a24-ad40-9c5d10860c90-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.852286 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57598ddc-f214-47b1-bdef-10bdf94607d1-audit-dir\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.853085 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-images\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.853555 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-config\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.854081 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/57598ddc-f214-47b1-bdef-10bdf94607d1-audit\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.854462 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f85e75-6682-490f-9f1d-cdf924a67f38-serving-cert\") pod \"controller-manager-879f6c89f-l99lt\" (UID: 
\"97f85e75-6682-490f-9f1d-cdf924a67f38\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.856807 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/57598ddc-f214-47b1-bdef-10bdf94607d1-node-pullsecrets\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.857674 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4-metrics-tls\") pod \"dns-operator-744455d44c-hvbtc\" (UID: \"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.857968 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2fd1877-7bc3-4808-8a45-716da7b829e5-trusted-ca\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.859181 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2fd1877-7bc3-4808-8a45-716da7b829e5-serving-cert\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.859777 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-etcd-client\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.860122 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qh5kz"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.860879 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.861578 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.861815 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.861820 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.865936 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/679bac75-fbd8-4a24-ad40-9c5d10860c90-serving-cert\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.866024 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.866744 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.868466 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.868680 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.874138 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.874873 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.875119 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tvwnv"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.875843 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.876109 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.876390 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.877678 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57598ddc-f214-47b1-bdef-10bdf94607d1-serving-cert\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.881293 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.891409 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.892544 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.893004 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.893745 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.896603 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.898949 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wzmf\" (UniqueName: \"kubernetes.io/projected/deb5ce1c-7cc0-4c71-bb9e-01b4742860a0-kube-api-access-6wzmf\") pod \"openshift-apiserver-operator-796bbdcf4f-2hqzn\" (UID: \"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.900032 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901009 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c96c02d3-be15-47e5-a4bd-e65644751b10-trusted-ca\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901047 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzc7g\" (UniqueName: \"kubernetes.io/projected/c96c02d3-be15-47e5-a4bd-e65644751b10-kube-api-access-rzc7g\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901073 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68e8cbb8-5319-4c56-9636-3bcefa32d29e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901099 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrjsb\" (UniqueName: \"kubernetes.io/projected/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-kube-api-access-jrjsb\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901122 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-config\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 
15:21:13.901146 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68e8cbb8-5319-4c56-9636-3bcefa32d29e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901214 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-trusted-ca-bundle\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901254 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-service-ca\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901311 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-oauth-config\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901350 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c96c02d3-be15-47e5-a4bd-e65644751b10-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901383 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c96c02d3-be15-47e5-a4bd-e65644751b10-metrics-tls\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901444 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k672\" (UniqueName: \"kubernetes.io/projected/68e8cbb8-5319-4c56-9636-3bcefa32d29e-kube-api-access-5k672\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901525 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-oauth-serving-cert\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.901588 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-serving-cert\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.902065 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.903402 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-trusted-ca-bundle\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.903798 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-service-ca\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.904083 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-config\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.904306 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-oauth-serving-cert\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.905452 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.905811 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.907819 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.908038 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-oauth-config\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.908539 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.909235 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66pz7"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.909870 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.910414 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.912618 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.913048 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.914109 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.915442 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.915596 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-serving-cert\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.917331 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z9pc5"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.917838 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6596q"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.918177 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.918392 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9f8ct"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.918454 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.918759 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9f8ct" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.920152 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.920566 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.922315 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gcdpp"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.922502 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.922709 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.923874 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.927092 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8l46h"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.929962 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.929985 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jrkdc"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.931147 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j8tlz"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.932350 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.933445 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zrrnn"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.934757 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.936016 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sg88v"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.937017 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.937620 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5r48x"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.942680 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.943129 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.945267 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jpgn"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.962474 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tvwnv"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.962530 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l99lt"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.964195 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.965084 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.965403 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jgtrg"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.967227 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.967521 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-fzfxb"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.968901 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fzfxb" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.972513 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.980471 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh"] Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.984861 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 15:21:13 crc kubenswrapper[4656]: I0128 15:21:13.999286 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.002387 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c96c02d3-be15-47e5-a4bd-e65644751b10-trusted-ca\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.002437 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzc7g\" (UniqueName: \"kubernetes.io/projected/c96c02d3-be15-47e5-a4bd-e65644751b10-kube-api-access-rzc7g\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.002468 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68e8cbb8-5319-4c56-9636-3bcefa32d29e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.002503 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68e8cbb8-5319-4c56-9636-3bcefa32d29e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.003528 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c96c02d3-be15-47e5-a4bd-e65644751b10-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.003584 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c96c02d3-be15-47e5-a4bd-e65644751b10-metrics-tls\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.003649 4656 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5k672\" (UniqueName: \"kubernetes.io/projected/68e8cbb8-5319-4c56-9636-3bcefa32d29e-kube-api-access-5k672\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.003730 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.004193 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.004388 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68e8cbb8-5319-4c56-9636-3bcefa32d29e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.005500 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hvbtc"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.006827 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.007625 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rmnzt"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.008533 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68e8cbb8-5319-4c56-9636-3bcefa32d29e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.009145 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.009265 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.011384 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.011428 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.012832 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.013727 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c8r6q"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.015397 4656 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.017089 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66pz7"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.024793 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.024843 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fzfxb"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.026333 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.027657 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sg88v"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.030688 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.034746 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jgtrg"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.038842 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6596q"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.041486 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.043241 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z9pc5"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.043790 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.047936 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.065050 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.083661 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.155620 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.157872 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.158117 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.160010 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.171093 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.187025 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.203455 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.211798 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c96c02d3-be15-47e5-a4bd-e65644751b10-metrics-tls\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.241176 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.246664 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c96c02d3-be15-47e5-a4bd-e65644751b10-trusted-ca\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.247028 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.272817 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.298270 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.302422 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmz5l\" (UniqueName: \"kubernetes.io/projected/74b5802b-b8fb-48d1-8723-2c78386825db-kube-api-access-lmz5l\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.318238 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s67ff\" (UniqueName: \"kubernetes.io/projected/97f85e75-6682-490f-9f1d-cdf924a67f38-kube-api-access-s67ff\") pod \"controller-manager-879f6c89f-l99lt\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.356835 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlqkc\" (UniqueName: \"kubernetes.io/projected/6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4-kube-api-access-xlqkc\") pod \"dns-operator-744455d44c-hvbtc\" (UID: \"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.362769 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkqj9\" (UniqueName: \"kubernetes.io/projected/e2fd1877-7bc3-4808-8a45-716da7b829e5-kube-api-access-xkqj9\") pod \"console-operator-58897d9998-rmnzt\" (UID: \"e2fd1877-7bc3-4808-8a45-716da7b829e5\") " pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.381578 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjmtn\" (UniqueName: \"kubernetes.io/projected/57598ddc-f214-47b1-bdef-10bdf94607d1-kube-api-access-mjmtn\") pod \"apiserver-76f77b778f-j8tlz\" (UID: \"57598ddc-f214-47b1-bdef-10bdf94607d1\") " pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.387225 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.387822 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.404591 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.426341 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.449690 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.449718 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.452918 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.466941 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.470770 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.485531 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.528976 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njdjz\" (UniqueName: \"kubernetes.io/projected/44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9-kube-api-access-njdjz\") pod \"machine-api-operator-5694c8668f-gcdpp\" (UID: \"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.538113 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.548323 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.562753 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krlkb\" (UniqueName: \"kubernetes.io/projected/4d96cdec-34f1-44e2-9380-40475a720b31-kube-api-access-krlkb\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.580354 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlrc9\" (UniqueName: \"kubernetes.io/projected/679bac75-fbd8-4a24-ad40-9c5d10860c90-kube-api-access-jlrc9\") pod \"authentication-operator-69f744f599-8l46h\" (UID: \"679bac75-fbd8-4a24-ad40-9c5d10860c90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.587206 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.603585 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.603680 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cp7b\" (UniqueName: \"kubernetes.io/projected/d903ef3d-1544-4343-b254-15939a05fec0-kube-api-access-2cp7b\") pod \"downloads-7954f5f757-zrrnn\" (UID: \"d903ef3d-1544-4343-b254-15939a05fec0\") " pod="openshift-console/downloads-7954f5f757-zrrnn" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.623448 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.643883 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.668226 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.676165 4656 secret.go:188] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.676323 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls podName:0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.176279493 +0000 UTC m=+165.684450297 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls") pod "cluster-samples-operator-665b6dd947-jcx9v" (UID: "0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.792305 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.792564 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.794432 4656 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.794485 4656 projected.go:194] Error preparing data for projected volume kube-api-access-w7gtm for pod openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.794599 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-kube-api-access-w7gtm podName:0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9 nodeName:}" failed. 
No retries permitted until 2026-01-28 15:21:15.294574803 +0000 UTC m=+165.802745607 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w7gtm" (UniqueName: "kubernetes.io/projected/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-kube-api-access-w7gtm") pod "cluster-samples-operator-665b6dd947-jcx9v" (UID: "0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9") : failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.800944 4656 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.802162 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert podName:74b5802b-b8fb-48d1-8723-2c78386825db nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.30212449 +0000 UTC m=+165.810295314 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert") pod "oauth-openshift-558db77b4-7jpgn" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.802292 4656 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.802333 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config podName:a9d5ce28-bfd3-4a89-9339-e2df3378e9d7 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.302321926 +0000 UTC m=+165.810492730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config") pod "route-controller-manager-6576b87f9c-4twj5" (UID: "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7") : failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.802674 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zrrnn" Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.749466 4656 configmap.go:193] Couldn't get configMap openshift-authentication/audit: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.803897 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies podName:74b5802b-b8fb-48d1-8723-2c78386825db nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.30386857 +0000 UTC m=+165.812039374 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies") pod "oauth-openshift-558db77b4-7jpgn" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db") : failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.809396 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.809610 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.809728 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.810048 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.813543 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.816673 4656 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.816811 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data podName:74b5802b-b8fb-48d1-8723-2c78386825db nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.316785092 +0000 UTC m=+165.824955896 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-558db77b4-7jpgn" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.819379 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.823286 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.843157 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.851376 4656 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.851479 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login podName:74b5802b-b8fb-48d1-8723-2c78386825db nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.351449068 +0000 UTC m=+165.859619872 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login") pod "oauth-openshift-558db77b4-7jpgn" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.851779 4656 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.851818 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection podName:74b5802b-b8fb-48d1-8723-2c78386825db nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.351807928 +0000 UTC m=+165.859978742 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection") pod "oauth-openshift-558db77b4-7jpgn" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.852114 4656 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.852245 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert podName:a9d5ce28-bfd3-4a89-9339-e2df3378e9d7 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.352184759 +0000 UTC m=+165.860355623 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert") pod "route-controller-manager-6576b87f9c-4twj5" (UID: "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.855319 4656 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.855377 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-auth-proxy-config podName:4d96cdec-34f1-44e2-9380-40475a720b31 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.35536283 +0000 UTC m=+165.863533674 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-auth-proxy-config") pod "machine-approver-56656f9798-6zb9x" (UID: "4d96cdec-34f1-44e2-9380-40475a720b31") : failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.855403 4656 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.855432 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error podName:74b5802b-b8fb-48d1-8723-2c78386825db nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.355423832 +0000 UTC m=+165.863594726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error") pod "oauth-openshift-558db77b4-7jpgn" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.857504 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" event={"ID":"f905c9bf-de63-4bbb-842b-c5cfb76fec46","Type":"ContainerStarted","Data":"adf139bd697bec0674abe58031b80d437f7f4d3a3679e017234792fa60a606ba"} Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.857551 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" event={"ID":"f905c9bf-de63-4bbb-842b-c5cfb76fec46","Type":"ContainerStarted","Data":"011c8afe04db65fbf680bb337a4fd0ebe7f1f4afb95b9916dd7c58091c717b26"} Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.857943 4656 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: E0128 15:21:14.858035 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca podName:a9d5ce28-bfd3-4a89-9339-e2df3378e9d7 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:15.358010236 +0000 UTC m=+165.866181110 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca") pod "route-controller-manager-6576b87f9c-4twj5" (UID: "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7") : failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.863885 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.882366 4656 request.go:700] Waited for 1.015349101s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&limit=500&resourceVersion=0 Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.882765 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" event={"ID":"a23fe8b6-b461-4abb-ad2a-2bdd501fad81","Type":"ContainerStarted","Data":"47d6029e172ca163a5c77a05a8c8e3969c6c35e6abe2418dde1d0845ef20182a"} Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.888603 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.904992 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.905732 4656 generic.go:334] "Generic (PLEG): container finished" podID="f0d9e967-a840-414e-9ab7-00affd50fec5" containerID="68c5cefbc1f9da95ec54c3f2f1119adf7e7b1105d458136dd149459d12044eea" exitCode=0 Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.907374 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" event={"ID":"f0d9e967-a840-414e-9ab7-00affd50fec5","Type":"ContainerDied","Data":"68c5cefbc1f9da95ec54c3f2f1119adf7e7b1105d458136dd149459d12044eea"} Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.924209 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.932214 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j8tlz"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.932274 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" event={"ID":"f0d9e967-a840-414e-9ab7-00affd50fec5","Type":"ContainerStarted","Data":"17a56a7b997ab4d612373861e24a4dab84680e21263fe8eff34d95c613acbda5"} Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.932319 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" event={"ID":"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0","Type":"ContainerStarted","Data":"bb1ad2570d532bbc4a7045efd967d8de033a0a667b76b021f4fe18bd23f293e4"} Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.932345 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" 
event={"ID":"deb5ce1c-7cc0-4c71-bb9e-01b4742860a0","Type":"ContainerStarted","Data":"3a349004818c7610c283987546efff6a56fda5a459d8905902cba0e208d7ce83"} Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.937379 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hvbtc"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.937427 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l99lt"] Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.948313 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 15:21:14 crc kubenswrapper[4656]: W0128 15:21:14.950835 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57598ddc_f214_47b1_bdef_10bdf94607d1.slice/crio-bfe1ae8bf9d64e2c7b7ade30ca6cb508be53e47c35c373b451803d08422d9897 WatchSource:0}: Error finding container bfe1ae8bf9d64e2c7b7ade30ca6cb508be53e47c35c373b451803d08422d9897: Status 404 returned error can't find the container with id bfe1ae8bf9d64e2c7b7ade30ca6cb508be53e47c35c373b451803d08422d9897 Jan 28 15:21:14 crc kubenswrapper[4656]: W0128 15:21:14.957670 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97f85e75_6682_490f_9f1d_cdf924a67f38.slice/crio-c4a2df154e1c3edead1c41618548fc41c7c90a8f88ea7c9973c02acba56d736c WatchSource:0}: Error finding container c4a2df154e1c3edead1c41618548fc41c7c90a8f88ea7c9973c02acba56d736c: Status 404 returned error can't find the container with id c4a2df154e1c3edead1c41618548fc41c7c90a8f88ea7c9973c02acba56d736c Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.963533 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 15:21:14 crc kubenswrapper[4656]: W0128 15:21:14.967392 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e4f467e_8a0b_4e05_a393_3dcc06f5b8a4.slice/crio-7c582cab7e76e09a74755f708ec7512725851b77adaf41dbdd60dffa45f19603 WatchSource:0}: Error finding container 7c582cab7e76e09a74755f708ec7512725851b77adaf41dbdd60dffa45f19603: Status 404 returned error can't find the container with id 7c582cab7e76e09a74755f708ec7512725851b77adaf41dbdd60dffa45f19603 Jan 28 15:21:14 crc kubenswrapper[4656]: I0128 15:21:14.983831 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.008178 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.024567 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.044479 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.068218 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 15:21:15 crc 
kubenswrapper[4656]: I0128 15:21:15.084360 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.107324 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.122607 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.143685 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.164068 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.183486 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.189092 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8l46h"] Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.210118 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.210526 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zrrnn"] Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.231273 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrjsb\" (UniqueName: \"kubernetes.io/projected/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-kube-api-access-jrjsb\") pod \"console-f9d7485db-jrkdc\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.245825 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.263173 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.283156 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.308688 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.312814 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7gtm\" (UniqueName: \"kubernetes.io/projected/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-kube-api-access-w7gtm\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.312911 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.313057 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.313142 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.390778 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.391381 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.393111 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rmnzt"] Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.393267 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.400070 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.408178 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.413986 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.414049 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.414077 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.414225 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.414247 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.414266 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.414295 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-auth-proxy-config\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:15 crc kubenswrapper[4656]: W0128 15:21:15.415071 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2fd1877_7bc3_4808_8a45_716da7b829e5.slice/crio-7fde1bf31974dffc7925336c520693e3a96c2d054c75849032b196bf5246ab0b WatchSource:0}: Error finding container 7fde1bf31974dffc7925336c520693e3a96c2d054c75849032b196bf5246ab0b: Status 404 returned error can't find the container with id 7fde1bf31974dffc7925336c520693e3a96c2d054c75849032b196bf5246ab0b Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.415284 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gcdpp"] Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.424576 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.443821 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.465148 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.477102 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.482497 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.504160 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.523324 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 15:21:15 crc kubenswrapper[4656]: E0128 15:21:15.538452 4656 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:15 crc kubenswrapper[4656]: E0128 15:21:15.538531 4656 projected.go:194] Error preparing data for projected volume kube-api-access-d7qd6 for pod openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:15 crc kubenswrapper[4656]: E0128 15:21:15.538610 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6 podName:a9d5ce28-bfd3-4a89-9339-e2df3378e9d7 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.038584867 +0000 UTC m=+166.546755671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d7qd6" (UniqueName: "kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6") pod "route-controller-manager-6576b87f9c-4twj5" (UID: "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7") : failed to sync configmap cache: timed out waiting for the condition Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.544268 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.563656 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.585379 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.602931 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.628311 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.644144 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.665046 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.671476 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jrkdc"] Jan 28 15:21:15 crc kubenswrapper[4656]: W0128 15:21:15.681405 4656 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacd5c0d8_8e06_4dfe_9e89_fd89b194ec1f.slice/crio-c41207c51ae6b24c926215e0f0025b63660d715fc02590a179969ca36e581017 WatchSource:0}: Error finding container c41207c51ae6b24c926215e0f0025b63660d715fc02590a179969ca36e581017: Status 404 returned error can't find the container with id c41207c51ae6b24c926215e0f0025b63660d715fc02590a179969ca36e581017 Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.683635 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.704454 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.727001 4656 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.744023 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.764676 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.783317 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.803612 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.822917 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.843696 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.863421 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.884068 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.902066 4656 request.go:700] Waited for 1.899246885s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.924140 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzc7g\" (UniqueName: \"kubernetes.io/projected/c96c02d3-be15-47e5-a4bd-e65644751b10-kube-api-access-rzc7g\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.931180 4656 generic.go:334] "Generic (PLEG): container finished" podID="a23fe8b6-b461-4abb-ad2a-2bdd501fad81" containerID="58e486120f1f647b424606ea52dd79bd76d1d9378d9c0c2886828037842d10c9" exitCode=0 Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.931337 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" 
event={"ID":"a23fe8b6-b461-4abb-ad2a-2bdd501fad81","Type":"ContainerDied","Data":"58e486120f1f647b424606ea52dd79bd76d1d9378d9c0c2886828037842d10c9"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.935499 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" event={"ID":"f0d9e967-a840-414e-9ab7-00affd50fec5","Type":"ContainerStarted","Data":"ed1a824de788fff7cfd939d149b3a63fd7c080220f0a8764d97b6d9e277a9b87"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.935671 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.941839 4656 generic.go:334] "Generic (PLEG): container finished" podID="57598ddc-f214-47b1-bdef-10bdf94607d1" containerID="27e7f76b68c2bbef12bdeffbe31575151e8edb14b989f7699ace53dac1631768" exitCode=0 Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.941919 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" event={"ID":"57598ddc-f214-47b1-bdef-10bdf94607d1","Type":"ContainerDied","Data":"27e7f76b68c2bbef12bdeffbe31575151e8edb14b989f7699ace53dac1631768"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.941955 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" event={"ID":"57598ddc-f214-47b1-bdef-10bdf94607d1","Type":"ContainerStarted","Data":"bfe1ae8bf9d64e2c7b7ade30ca6cb508be53e47c35c373b451803d08422d9897"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.943490 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k672\" (UniqueName: \"kubernetes.io/projected/68e8cbb8-5319-4c56-9636-3bcefa32d29e-kube-api-access-5k672\") pod \"openshift-controller-manager-operator-756b6f6bc6-bbdhj\" (UID: \"68e8cbb8-5319-4c56-9636-3bcefa32d29e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.947404 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jrkdc" event={"ID":"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f","Type":"ContainerStarted","Data":"6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.947459 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jrkdc" event={"ID":"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f","Type":"ContainerStarted","Data":"c41207c51ae6b24c926215e0f0025b63660d715fc02590a179969ca36e581017"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.951268 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" event={"ID":"679bac75-fbd8-4a24-ad40-9c5d10860c90","Type":"ContainerStarted","Data":"f2c476200ac3fad60f27013e1ecb1ed0845b932ae4f0efae3bf6f9dd0eb28a5b"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.951325 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" event={"ID":"679bac75-fbd8-4a24-ad40-9c5d10860c90","Type":"ContainerStarted","Data":"e4efb46b242bf0aceff6dfd94d827dba0c1e85e29b1cb80d05ed486af5952d2b"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.957673 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc" event={"ID":"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4","Type":"ContainerStarted","Data":"0284e1fff81c3dc906a1db43941a3d4c409b0ab683203b8c6b946291e1751ef1"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.957722 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc" event={"ID":"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4","Type":"ContainerStarted","Data":"445c1ec0ef703c76041530865f710373188bc739c7df2f5d1fe1bc8b800e07cf"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.957734 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc" event={"ID":"6e4f467e-8a0b-4e05-a393-3dcc06f5b8a4","Type":"ContainerStarted","Data":"7c582cab7e76e09a74755f708ec7512725851b77adaf41dbdd60dffa45f19603"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.959873 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" event={"ID":"97f85e75-6682-490f-9f1d-cdf924a67f38","Type":"ContainerStarted","Data":"f0551be6db9554b1b88c01d840464311627da50873cfe249dfc47e8c1d6604bf"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.959902 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" event={"ID":"97f85e75-6682-490f-9f1d-cdf924a67f38","Type":"ContainerStarted","Data":"c4a2df154e1c3edead1c41618548fc41c7c90a8f88ea7c9973c02acba56d736c"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.960574 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.962133 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rmnzt" event={"ID":"e2fd1877-7bc3-4808-8a45-716da7b829e5","Type":"ContainerStarted","Data":"7a1663e370e51c8d4426eaedc32fd875143ba194d4621648d85c9defc7de549d"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.962193 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rmnzt" event={"ID":"e2fd1877-7bc3-4808-8a45-716da7b829e5","Type":"ContainerStarted","Data":"7fde1bf31974dffc7925336c520693e3a96c2d054c75849032b196bf5246ab0b"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.962678 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-rmnzt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.964147 4656 patch_prober.go:28] interesting pod/console-operator-58897d9998-rmnzt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.964234 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rmnzt" podUID="e2fd1877-7bc3-4808-8a45-716da7b829e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.964717 4656 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l99lt container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.964751 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" podUID="97f85e75-6682-490f-9f1d-cdf924a67f38" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.967076 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" event={"ID":"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9","Type":"ContainerStarted","Data":"d7c345022a99590d867022d672af1c88b341e1164a6513b904b3a6971299921f"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.967120 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" event={"ID":"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9","Type":"ContainerStarted","Data":"d35c67a4b06b0313ea9859c580860672c985d6796a0bdd76aa918e04c4a93300"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.967133 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" event={"ID":"44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9","Type":"ContainerStarted","Data":"5326e46627dc203978818d798dd745a3c2c2102f70c971fda55c35276af43f67"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.968472 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c96c02d3-be15-47e5-a4bd-e65644751b10-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vqcvt\" (UID: \"c96c02d3-be15-47e5-a4bd-e65644751b10\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.973437 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zrrnn" event={"ID":"d903ef3d-1544-4343-b254-15939a05fec0","Type":"ContainerStarted","Data":"60c65196fb4866dd0b1e0bbc6538e306f747567dc79d0ebdf9efbab4620baf63"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.973911 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zrrnn" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.974000 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zrrnn" event={"ID":"d903ef3d-1544-4343-b254-15939a05fec0","Type":"ContainerStarted","Data":"73be6363650d9e30dc18d3355d038b1a30ec5a219ce906d880234a62de3b63b0"} Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.974679 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.974814 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:21:15 crc 
kubenswrapper[4656]: I0128 15:21:15.985393 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 15:21:15 crc kubenswrapper[4656]: I0128 15:21:15.995351 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d96cdec-34f1-44e2-9380-40475a720b31-auth-proxy-config\") pod \"machine-approver-56656f9798-6zb9x\" (UID: \"4d96cdec-34f1-44e2-9380-40475a720b31\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.003389 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.010545 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.028091 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-ca\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.028229 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8947t\" (UniqueName: \"kubernetes.io/projected/8be343e6-2e63-426f-95cc-06f64f7417cc-kube-api-access-8947t\") pod \"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.028317 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-client\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.028382 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be343e6-2e63-426f-95cc-06f64f7417cc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.028419 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-service-ca\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.028494 4656 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be343e6-2e63-426f-95cc-06f64f7417cc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.028522 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-tls\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.028538 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-trusted-ca\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.031055 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-certificates\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.031147 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7db3547-02a4-4214-ad16-0b513f48b6d7-serving-cert\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.031268 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.032016 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5823f5c7-fabe-4d4b-a3df-49349749b19e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.032152 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.032320 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-bound-sa-token\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.032415 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-config\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.032656 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7f9j\" (UniqueName: \"kubernetes.io/projected/c7db3547-02a4-4214-ad16-0b513f48b6d7-kube-api-access-k7f9j\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.032884 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5823f5c7-fabe-4d4b-a3df-49349749b19e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.033295 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.533278674 +0000 UTC m=+167.041449568 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.033633 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt6gm\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-kube-api-access-bt6gm\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.036698 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.047537 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.063875 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.085886 4656 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.106169 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.107894 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.117555 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.123010 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.134393 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.134685 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.134923 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-service-ca\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.134966 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk6ws\" (UniqueName: \"kubernetes.io/projected/585af9c8-b122-49d3-8640-3dc5fb1613ab-kube-api-access-pk6ws\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.134994 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-mountpoint-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135021 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be343e6-2e63-426f-95cc-06f64f7417cc-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135044 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9f9g\" (UniqueName: \"kubernetes.io/projected/e4b1709f-307f-47d0-8648-125e2514c80e-kube-api-access-j9f9g\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135069 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-apiservice-cert\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135091 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-plugins-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135113 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-tls\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135131 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-trusted-ca\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135152 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-registration-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135173 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3a1daea1-c036-4141-95dc-ce3567519970-certs\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135209 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2klt7\" (UniqueName: \"kubernetes.io/projected/4c5b7670-d8d9-4ea1-822d-788709c62ee5-kube-api-access-2klt7\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " 
pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135254 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e4b1709f-307f-47d0-8648-125e2514c80e-profile-collector-cert\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135277 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f9360e3-7265-42ec-b104-d62ab6ec66f4-serving-cert\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135300 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgh67\" (UniqueName: \"kubernetes.io/projected/50ad627b-637f-4763-96c1-4c1beb352c70-kube-api-access-zgh67\") pod \"package-server-manager-789f6589d5-wh267\" (UID: \"50ad627b-637f-4763-96c1-4c1beb352c70\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135324 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk5b6\" (UniqueName: \"kubernetes.io/projected/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-kube-api-access-gk5b6\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135373 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-config\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135396 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-certificates\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135421 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-metrics-tls\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135443 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gct8j\" (UniqueName: \"kubernetes.io/projected/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-kube-api-access-gct8j\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 
15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.135484 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.635449221 +0000 UTC m=+167.143620025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135571 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5823f5c7-fabe-4d4b-a3df-49349749b19e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135610 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-default-certificate\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135638 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/390f2e18-1f51-46da-93cf-da6b0d524b0d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135672 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p5b8\" (UniqueName: \"kubernetes.io/projected/c7b09f99-0d13-49a0-8b8d-fc77915a171d-kube-api-access-7p5b8\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135742 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c50cc4de-dd25-4337-a532-3384d5a87626-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w7bws\" (UID: \"c50cc4de-dd25-4337-a532-3384d5a87626\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135800 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7f9j\" (UniqueName: \"kubernetes.io/projected/c7db3547-02a4-4214-ad16-0b513f48b6d7-kube-api-access-k7f9j\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc 
kubenswrapper[4656]: I0128 15:21:16.135829 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6w9g\" (UniqueName: \"kubernetes.io/projected/3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f-kube-api-access-b6w9g\") pod \"ingress-canary-fzfxb\" (UID: \"3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f\") " pod="openshift-ingress-canary/ingress-canary-fzfxb" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135854 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-stats-auth\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135886 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5823f5c7-fabe-4d4b-a3df-49349749b19e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135910 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-tmpfs\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135936 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6v7n\" (UniqueName: \"kubernetes.io/projected/39d8752e-2237-4115-b66c-a9afc736dffe-kube-api-access-d6v7n\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.135975 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3a1daea1-c036-4141-95dc-ce3567519970-node-bootstrap-token\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136003 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f-cert\") pod \"ingress-canary-fzfxb\" (UID: \"3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f\") " pod="openshift-ingress-canary/ingress-canary-fzfxb" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136050 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-config-volume\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136071 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/390f2e18-1f51-46da-93cf-da6b0d524b0d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136094 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-service-ca\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136101 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-ca\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136129 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkd52\" (UniqueName: \"kubernetes.io/projected/d05cf23a-0ecb-4cd3-bafe-0fd7d930d916-kube-api-access-zkd52\") pod \"migrator-59844c95c7-xlcjq\" (UID: \"d05cf23a-0ecb-4cd3-bafe-0fd7d930d916\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136154 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136176 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n94ds\" (UniqueName: \"kubernetes.io/projected/01e19302-0470-49dd-88d5-9a568e820278-kube-api-access-n94ds\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136212 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25578c16-69f7-48c0-8a44-040950b9b8a1-config-volume\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136231 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz67g\" (UniqueName: \"kubernetes.io/projected/7f9360e3-7265-42ec-b104-d62ab6ec66f4-kube-api-access-lz67g\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136241 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be343e6-2e63-426f-95cc-06f64f7417cc-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136336 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136358 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-client\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136377 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136393 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-webhook-cert\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136427 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8lzx\" (UniqueName: \"kubernetes.io/projected/c50cc4de-dd25-4337-a532-3384d5a87626-kube-api-access-g8lzx\") pod \"control-plane-machine-set-operator-78cbb6b69f-w7bws\" (UID: \"c50cc4de-dd25-4337-a532-3384d5a87626\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136474 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8tb7\" (UniqueName: \"kubernetes.io/projected/d8d474a0-0bf1-467d-ab77-4b94a17f7881-kube-api-access-n8tb7\") pod \"multus-admission-controller-857f4d67dd-tvwnv\" (UID: \"d8d474a0-0bf1-467d-ab77-4b94a17f7881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136495 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-socket-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136511 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/01e19302-0470-49dd-88d5-9a568e820278-service-ca-bundle\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136527 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/50ad627b-637f-4763-96c1-4c1beb352c70-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wh267\" (UID: \"50ad627b-637f-4763-96c1-4c1beb352c70\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136580 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-config\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136599 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39d8752e-2237-4115-b66c-a9afc736dffe-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136620 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtg6z\" (UniqueName: \"kubernetes.io/projected/854de0fc-bfab-4d4d-9931-b84561234f71-kube-api-access-rtg6z\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136638 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-signing-cabundle\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136655 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9360e3-7265-42ec-b104-d62ab6ec66f4-config\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136704 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7db3547-02a4-4214-ad16-0b513f48b6d7-serving-cert\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136718 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/e4b1709f-307f-47d0-8648-125e2514c80e-srv-cert\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136751 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27whz\" (UniqueName: \"kubernetes.io/projected/3a1daea1-c036-4141-95dc-ce3567519970-kube-api-access-27whz\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136780 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136827 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-bound-sa-token\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136844 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-config\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136868 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-metrics-certs\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136883 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8d474a0-0bf1-467d-ab77-4b94a17f7881-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tvwnv\" (UID: \"d8d474a0-0bf1-467d-ab77-4b94a17f7881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136903 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx79z\" (UniqueName: \"kubernetes.io/projected/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-kube-api-access-wx79z\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136937 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/585af9c8-b122-49d3-8640-3dc5fb1613ab-proxy-tls\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: 
\"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136951 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/585af9c8-b122-49d3-8640-3dc5fb1613ab-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136980 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.136995 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/585af9c8-b122-49d3-8640-3dc5fb1613ab-images\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137011 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9pk9\" (UniqueName: \"kubernetes.io/projected/25578c16-69f7-48c0-8a44-040950b9b8a1-kube-api-access-j9pk9\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137027 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/854de0fc-bfab-4d4d-9931-b84561234f71-profile-collector-cert\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137043 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39d8752e-2237-4115-b66c-a9afc736dffe-proxy-tls\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137044 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5823f5c7-fabe-4d4b-a3df-49349749b19e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137116 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7qd6\" (UniqueName: \"kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6\") pod 
\"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137135 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt6gm\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-kube-api-access-bt6gm\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137153 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137180 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/854de0fc-bfab-4d4d-9931-b84561234f71-srv-cert\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137224 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-signing-key\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137256 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137274 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25578c16-69f7-48c0-8a44-040950b9b8a1-secret-volume\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137288 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-csi-data-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137323 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/390f2e18-1f51-46da-93cf-da6b0d524b0d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: 
\"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137351 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8947t\" (UniqueName: \"kubernetes.io/projected/8be343e6-2e63-426f-95cc-06f64f7417cc-kube-api-access-8947t\") pod \"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137389 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be343e6-2e63-426f-95cc-06f64f7417cc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.137838 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-ca\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.138297 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-tls\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.139429 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-certificates\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.141777 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-trusted-ca\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.146432 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be343e6-2e63-426f-95cc-06f64f7417cc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.147025 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7qd6\" (UniqueName: \"kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.147533 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7db3547-02a4-4214-ad16-0b513f48b6d7-config\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.147851 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.647835677 +0000 UTC m=+167.156006481 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.148836 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c7db3547-02a4-4214-ad16-0b513f48b6d7-etcd-client\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.155906 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.157711 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7db3547-02a4-4214-ad16-0b513f48b6d7-serving-cert\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.164361 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.167443 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.186743 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5823f5c7-fabe-4d4b-a3df-49349749b19e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.187310 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.187823 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.190594 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.199278 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.207624 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.211558 4656 secret.go:188] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.211706 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls podName:0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9 nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.211674352 +0000 UTC m=+167.719845146 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls") pod "cluster-samples-operator-665b6dd947-jcx9v" (UID: "0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.220567 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7jpgn\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.224984 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.225052 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239107 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.239296 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.739266525 +0000 UTC m=+167.247437339 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239359 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-default-certificate\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239410 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/390f2e18-1f51-46da-93cf-da6b0d524b0d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239436 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p5b8\" (UniqueName: \"kubernetes.io/projected/c7b09f99-0d13-49a0-8b8d-fc77915a171d-kube-api-access-7p5b8\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239659 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6w9g\" (UniqueName: \"kubernetes.io/projected/3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f-kube-api-access-b6w9g\") pod \"ingress-canary-fzfxb\" (UID: \"3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f\") " pod="openshift-ingress-canary/ingress-canary-fzfxb" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239694 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c50cc4de-dd25-4337-a532-3384d5a87626-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w7bws\" (UID: \"c50cc4de-dd25-4337-a532-3384d5a87626\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239753 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-stats-auth\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239779 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-tmpfs\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239844 4656 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-d6v7n\" (UniqueName: \"kubernetes.io/projected/39d8752e-2237-4115-b66c-a9afc736dffe-kube-api-access-d6v7n\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239876 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3a1daea1-c036-4141-95dc-ce3567519970-node-bootstrap-token\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.239933 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f-cert\") pod \"ingress-canary-fzfxb\" (UID: \"3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f\") " pod="openshift-ingress-canary/ingress-canary-fzfxb" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240275 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-config-volume\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240303 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/390f2e18-1f51-46da-93cf-da6b0d524b0d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240370 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkd52\" (UniqueName: \"kubernetes.io/projected/d05cf23a-0ecb-4cd3-bafe-0fd7d930d916-kube-api-access-zkd52\") pod \"migrator-59844c95c7-xlcjq\" (UID: \"d05cf23a-0ecb-4cd3-bafe-0fd7d930d916\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240433 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240462 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n94ds\" (UniqueName: \"kubernetes.io/projected/01e19302-0470-49dd-88d5-9a568e820278-kube-api-access-n94ds\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240521 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25578c16-69f7-48c0-8a44-040950b9b8a1-config-volume\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240551 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz67g\" (UniqueName: \"kubernetes.io/projected/7f9360e3-7265-42ec-b104-d62ab6ec66f4-kube-api-access-lz67g\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240614 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240643 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240696 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-webhook-cert\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240721 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8lzx\" (UniqueName: \"kubernetes.io/projected/c50cc4de-dd25-4337-a532-3384d5a87626-kube-api-access-g8lzx\") pod \"control-plane-machine-set-operator-78cbb6b69f-w7bws\" (UID: \"c50cc4de-dd25-4337-a532-3384d5a87626\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240773 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8tb7\" (UniqueName: \"kubernetes.io/projected/d8d474a0-0bf1-467d-ab77-4b94a17f7881-kube-api-access-n8tb7\") pod \"multus-admission-controller-857f4d67dd-tvwnv\" (UID: \"d8d474a0-0bf1-467d-ab77-4b94a17f7881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240801 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-socket-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240823 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/50ad627b-637f-4763-96c1-4c1beb352c70-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wh267\" (UID: \"50ad627b-637f-4763-96c1-4c1beb352c70\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240879 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01e19302-0470-49dd-88d5-9a568e820278-service-ca-bundle\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240942 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-config\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.240976 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39d8752e-2237-4115-b66c-a9afc736dffe-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241044 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtg6z\" (UniqueName: \"kubernetes.io/projected/854de0fc-bfab-4d4d-9931-b84561234f71-kube-api-access-rtg6z\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241097 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9360e3-7265-42ec-b104-d62ab6ec66f4-config\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241134 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-signing-cabundle\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241193 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e4b1709f-307f-47d0-8648-125e2514c80e-srv-cert\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241236 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27whz\" (UniqueName: \"kubernetes.io/projected/3a1daea1-c036-4141-95dc-ce3567519970-kube-api-access-27whz\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241270 4656 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241362 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-metrics-certs\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241389 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8d474a0-0bf1-467d-ab77-4b94a17f7881-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tvwnv\" (UID: \"d8d474a0-0bf1-467d-ab77-4b94a17f7881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241443 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx79z\" (UniqueName: \"kubernetes.io/projected/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-kube-api-access-wx79z\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241677 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/585af9c8-b122-49d3-8640-3dc5fb1613ab-proxy-tls\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241708 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/585af9c8-b122-49d3-8640-3dc5fb1613ab-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241785 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241840 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/585af9c8-b122-49d3-8640-3dc5fb1613ab-images\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241909 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9pk9\" (UniqueName: 
\"kubernetes.io/projected/25578c16-69f7-48c0-8a44-040950b9b8a1-kube-api-access-j9pk9\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241935 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/854de0fc-bfab-4d4d-9931-b84561234f71-profile-collector-cert\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241957 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39d8752e-2237-4115-b66c-a9afc736dffe-proxy-tls\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.241994 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242475 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/854de0fc-bfab-4d4d-9931-b84561234f71-srv-cert\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242553 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242578 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-signing-key\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242601 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25578c16-69f7-48c0-8a44-040950b9b8a1-secret-volume\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242651 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-csi-data-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " 
pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242685 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/390f2e18-1f51-46da-93cf-da6b0d524b0d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242774 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk6ws\" (UniqueName: \"kubernetes.io/projected/585af9c8-b122-49d3-8640-3dc5fb1613ab-kube-api-access-pk6ws\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242815 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-mountpoint-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242842 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9f9g\" (UniqueName: \"kubernetes.io/projected/e4b1709f-307f-47d0-8648-125e2514c80e-kube-api-access-j9f9g\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242871 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-apiservice-cert\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242897 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-registration-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242921 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-plugins-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242946 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2klt7\" (UniqueName: \"kubernetes.io/projected/4c5b7670-d8d9-4ea1-822d-788709c62ee5-kube-api-access-2klt7\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.242996 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" 
(UniqueName: \"kubernetes.io/secret/3a1daea1-c036-4141-95dc-ce3567519970-certs\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.243022 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e4b1709f-307f-47d0-8648-125e2514c80e-profile-collector-cert\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.243044 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f9360e3-7265-42ec-b104-d62ab6ec66f4-serving-cert\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.243358 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk5b6\" (UniqueName: \"kubernetes.io/projected/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-kube-api-access-gk5b6\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.243413 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgh67\" (UniqueName: \"kubernetes.io/projected/50ad627b-637f-4763-96c1-4c1beb352c70-kube-api-access-zgh67\") pod \"package-server-manager-789f6589d5-wh267\" (UID: \"50ad627b-637f-4763-96c1-4c1beb352c70\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.243456 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-config\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.243888 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-metrics-tls\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.243924 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gct8j\" (UniqueName: \"kubernetes.io/projected/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-kube-api-access-gct8j\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.244377 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-mountpoint-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" 
Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.246545 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/585af9c8-b122-49d3-8640-3dc5fb1613ab-images\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.250748 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-registration-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.250858 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-plugins-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.252655 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.253524 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c50cc4de-dd25-4337-a532-3384d5a87626-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w7bws\" (UID: \"c50cc4de-dd25-4337-a532-3384d5a87626\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.254416 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/390f2e18-1f51-46da-93cf-da6b0d524b0d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.255123 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.255270 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-csi-data-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.255698 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-tmpfs\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.256921 
4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-default-certificate\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.257488 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/390f2e18-1f51-46da-93cf-da6b0d524b0d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.259070 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-config\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.260690 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/854de0fc-bfab-4d4d-9931-b84561234f71-profile-collector-cert\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.260788 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-signing-key\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.261353 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-webhook-cert\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.261713 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3a1daea1-c036-4141-95dc-ce3567519970-certs\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.261747 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config\") pod \"route-controller-manager-6576b87f9c-4twj5\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.262052 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/585af9c8-b122-49d3-8640-3dc5fb1613ab-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.262075 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e4b1709f-307f-47d0-8648-125e2514c80e-profile-collector-cert\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.262420 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f9360e3-7265-42ec-b104-d62ab6ec66f4-serving-cert\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.262666 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f-cert\") pod \"ingress-canary-fzfxb\" (UID: \"3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f\") " pod="openshift-ingress-canary/ingress-canary-fzfxb" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.262921 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-config-volume\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.263762 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.763745318 +0000 UTC m=+167.271916172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.264039 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01e19302-0470-49dd-88d5-9a568e820278-service-ca-bundle\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.264107 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9360e3-7265-42ec-b104-d62ab6ec66f4-config\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.264272 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c5b7670-d8d9-4ea1-822d-788709c62ee5-socket-dir\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.264680 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-signing-cabundle\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.274476 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-config\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.275062 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/854de0fc-bfab-4d4d-9931-b84561234f71-srv-cert\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.276821 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.277082 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3a1daea1-c036-4141-95dc-ce3567519970-node-bootstrap-token\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.279083 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39d8752e-2237-4115-b66c-a9afc736dffe-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.280717 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.281616 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-metrics-tls\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.283200 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8d474a0-0bf1-467d-ab77-4b94a17f7881-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tvwnv\" (UID: \"d8d474a0-0bf1-467d-ab77-4b94a17f7881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.283722 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-apiservice-cert\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.283921 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.285608 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25578c16-69f7-48c0-8a44-040950b9b8a1-config-volume\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.287464 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/585af9c8-b122-49d3-8640-3dc5fb1613ab-proxy-tls\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.287981 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39d8752e-2237-4115-b66c-a9afc736dffe-proxy-tls\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.287991 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25578c16-69f7-48c0-8a44-040950b9b8a1-secret-volume\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.288364 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-metrics-certs\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.289901 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.290727 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.291093 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/01e19302-0470-49dd-88d5-9a568e820278-stats-auth\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.291790 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e4b1709f-307f-47d0-8648-125e2514c80e-srv-cert\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.292706 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7gtm\" (UniqueName: \"kubernetes.io/projected/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-kube-api-access-w7gtm\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.296098 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/50ad627b-637f-4763-96c1-4c1beb352c70-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wh267\" (UID: \"50ad627b-637f-4763-96c1-4c1beb352c70\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.335891 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7f9j\" (UniqueName: \"kubernetes.io/projected/c7db3547-02a4-4214-ad16-0b513f48b6d7-kube-api-access-k7f9j\") pod \"etcd-operator-b45778765-c8r6q\" (UID: \"c7db3547-02a4-4214-ad16-0b513f48b6d7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.345350 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.347092 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.847071853 +0000 UTC m=+167.355242657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.349672 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt6gm\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-kube-api-access-bt6gm\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.349895 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.350411 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.850362878 +0000 UTC m=+167.358533682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.363329 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-bound-sa-token\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.397497 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.400850 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8947t\" (UniqueName: \"kubernetes.io/projected/8be343e6-2e63-426f-95cc-06f64f7417cc-kube-api-access-8947t\") pod \"kube-storage-version-migrator-operator-b67b599dd-mzh45\" (UID: \"8be343e6-2e63-426f-95cc-06f64f7417cc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.403569 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p5b8\" (UniqueName: \"kubernetes.io/projected/c7b09f99-0d13-49a0-8b8d-fc77915a171d-kube-api-access-7p5b8\") pod \"marketplace-operator-79b997595-66pz7\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.427022 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6w9g\" (UniqueName: \"kubernetes.io/projected/3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f-kube-api-access-b6w9g\") pod \"ingress-canary-fzfxb\" (UID: \"3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f\") " pod="openshift-ingress-canary/ingress-canary-fzfxb" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.441924 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.456532 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gct8j\" (UniqueName: \"kubernetes.io/projected/51c23ee3-ca6a-4660-ab9f-84c7d70b7a30-kube-api-access-gct8j\") pod \"service-ca-9c57cc56f-z9pc5\" (UID: \"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30\") " pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.459700 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.460419 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:16.9603944 +0000 UTC m=+167.468565214 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.474039 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk6ws\" (UniqueName: \"kubernetes.io/projected/585af9c8-b122-49d3-8640-3dc5fb1613ab-kube-api-access-pk6ws\") pod \"machine-config-operator-74547568cd-vdplj\" (UID: \"585af9c8-b122-49d3-8640-3dc5fb1613ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.487437 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/390f2e18-1f51-46da-93cf-da6b0d524b0d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-62tcz\" (UID: \"390f2e18-1f51-46da-93cf-da6b0d524b0d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.519949 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9f9g\" (UniqueName: \"kubernetes.io/projected/e4b1709f-307f-47d0-8648-125e2514c80e-kube-api-access-j9f9g\") pod \"catalog-operator-68c6474976-8qvpr\" (UID: \"e4b1709f-307f-47d0-8648-125e2514c80e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.520996 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt"] Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.538875 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.549213 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx79z\" (UniqueName: \"kubernetes.io/projected/0f9c8e83-6ca2-4b24-9f50-4d7d48af3938-kube-api-access-wx79z\") pod \"dns-default-jgtrg\" (UID: \"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938\") " pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.562560 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.562969 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.062951338 +0000 UTC m=+167.571122142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.564948 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2klt7\" (UniqueName: \"kubernetes.io/projected/4c5b7670-d8d9-4ea1-822d-788709c62ee5-kube-api-access-2klt7\") pod \"csi-hostpathplugin-sg88v\" (UID: \"4c5b7670-d8d9-4ea1-822d-788709c62ee5\") " pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.569728 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9pk9\" (UniqueName: \"kubernetes.io/projected/25578c16-69f7-48c0-8a44-040950b9b8a1-kube-api-access-j9pk9\") pod \"collect-profiles-29493555-gvkjl\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.570549 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fzfxb" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.588571 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj"] Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.607686 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.610523 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkd52\" (UniqueName: \"kubernetes.io/projected/d05cf23a-0ecb-4cd3-bafe-0fd7d930d916-kube-api-access-zkd52\") pod \"migrator-59844c95c7-xlcjq\" (UID: \"d05cf23a-0ecb-4cd3-bafe-0fd7d930d916\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.614228 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6v7n\" (UniqueName: \"kubernetes.io/projected/39d8752e-2237-4115-b66c-a9afc736dffe-kube-api-access-d6v7n\") pod \"machine-config-controller-84d6567774-xczjn\" (UID: \"39d8752e-2237-4115-b66c-a9afc736dffe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.623344 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk5b6\" (UniqueName: \"kubernetes.io/projected/ec3667a0-3a61-42f4-85cd-d9e7eb774fd4-kube-api-access-gk5b6\") pod \"packageserver-d55dfcdfc-9hdsz\" (UID: \"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.644195 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.652239 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgh67\" (UniqueName: \"kubernetes.io/projected/50ad627b-637f-4763-96c1-4c1beb352c70-kube-api-access-zgh67\") pod \"package-server-manager-789f6589d5-wh267\" (UID: \"50ad627b-637f-4763-96c1-4c1beb352c70\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.659446 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.669746 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a08f3bdf-18fe-4bc2-aee2-10a3dec3f428-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dxhzs\" (UID: \"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.670310 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.670496 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.170464678 +0000 UTC m=+167.678635482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.670673 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.671249 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.17123939 +0000 UTC m=+167.679410194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.693340 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.695690 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz67g\" (UniqueName: \"kubernetes.io/projected/7f9360e3-7265-42ec-b104-d62ab6ec66f4-kube-api-access-lz67g\") pod \"service-ca-operator-777779d784-6596q\" (UID: \"7f9360e3-7265-42ec-b104-d62ab6ec66f4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.714587 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.716137 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.721982 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtg6z\" (UniqueName: \"kubernetes.io/projected/854de0fc-bfab-4d4d-9931-b84561234f71-kube-api-access-rtg6z\") pod \"olm-operator-6b444d44fb-879dh\" (UID: \"854de0fc-bfab-4d4d-9931-b84561234f71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.724714 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.736423 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.769565 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.769684 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.773415 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e69663d4-ff3d-4991-804a-cf8d53a4c3ff-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rm7ls\" (UID: \"e69663d4-ff3d-4991-804a-cf8d53a4c3ff\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.774275 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.774560 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.274542579 +0000 UTC m=+167.782713373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.774643 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.775021 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.275014322 +0000 UTC m=+167.783185126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.784896 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.786827 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27whz\" (UniqueName: \"kubernetes.io/projected/3a1daea1-c036-4141-95dc-ce3567519970-kube-api-access-27whz\") pod \"machine-config-server-9f8ct\" (UID: \"3a1daea1-c036-4141-95dc-ce3567519970\") " pod="openshift-machine-config-operator/machine-config-server-9f8ct" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.787484 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8lzx\" (UniqueName: \"kubernetes.io/projected/c50cc4de-dd25-4337-a532-3384d5a87626-kube-api-access-g8lzx\") pod \"control-plane-machine-set-operator-78cbb6b69f-w7bws\" (UID: \"c50cc4de-dd25-4337-a532-3384d5a87626\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.802740 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sg88v" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.820005 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8tb7\" (UniqueName: \"kubernetes.io/projected/d8d474a0-0bf1-467d-ab77-4b94a17f7881-kube-api-access-n8tb7\") pod \"multus-admission-controller-857f4d67dd-tvwnv\" (UID: \"d8d474a0-0bf1-467d-ab77-4b94a17f7881\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.822084 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n94ds\" (UniqueName: \"kubernetes.io/projected/01e19302-0470-49dd-88d5-9a568e820278-kube-api-access-n94ds\") pod \"router-default-5444994796-qh5kz\" (UID: \"01e19302-0470-49dd-88d5-9a568e820278\") " pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.846740 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.871775 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jpgn"] Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.875443 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.875630 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.876011 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.375991544 +0000 UTC m=+167.884162348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.893051 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.893527 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.918128 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.953495 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.972794 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:16 crc kubenswrapper[4656]: I0128 15:21:16.977874 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:16 crc kubenswrapper[4656]: E0128 15:21:16.992708 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.478915683 +0000 UTC m=+167.987086487 (durationBeforeRetry 500ms). 
Jan 28 15:21:17 crc kubenswrapper[4656]: W0128 15:21:17.011328 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74b5802b_b8fb_48d1_8723_2c78386825db.slice/crio-4dc61eaad2ae739a312d7f61027749c556ace7e16aad00f87ef7790d83668fcc WatchSource:0}: Error finding container 4dc61eaad2ae739a312d7f61027749c556ace7e16aad00f87ef7790d83668fcc: Status 404 returned error can't find the container with id 4dc61eaad2ae739a312d7f61027749c556ace7e16aad00f87ef7790d83668fcc
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.018954 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" event={"ID":"57598ddc-f214-47b1-bdef-10bdf94607d1","Type":"ContainerStarted","Data":"c1c25d7a849348e00e28466a6d1954013a10df1897f625e7de1e4191f6e3557c"}
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.029311 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" event={"ID":"c96c02d3-be15-47e5-a4bd-e65644751b10","Type":"ContainerStarted","Data":"0c93c6b444188b16eb93563c92bda3521517c46f5314585cf2e6a0e50fe992f9"}
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.031224 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c8r6q"]
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.053977 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9f8ct"
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.055588 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" event={"ID":"a23fe8b6-b461-4abb-ad2a-2bdd501fad81","Type":"ContainerStarted","Data":"cec9e132b7774af4147a58a97483917bc89d2a629e7e9e677a2a5b66735ff993"}
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.060893 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" event={"ID":"4d96cdec-34f1-44e2-9380-40475a720b31","Type":"ContainerStarted","Data":"8292a5eabf703093297aeb509ba073fd967ad8381efe5eaa03520355550b90f2"}
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.063820 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" event={"ID":"68e8cbb8-5319-4c56-9636-3bcefa32d29e","Type":"ContainerStarted","Data":"67b131c844d27b471e69f8197dfc1211746c8dba7c1efa0fb6a1053449dac561"}
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.066553 4656 patch_prober.go:28] interesting pod/console-operator-58897d9998-rmnzt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.066592 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rmnzt" podUID="e2fd1877-7bc3-4808-8a45-716da7b829e5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.076107 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.076151 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.085775 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.087200 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.587180664 +0000 UTC m=+168.095351468 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.089156 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt"
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.187993 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.200515 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.700494791 +0000 UTC m=+168.208665595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.282834 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45"]
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.289251 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.289507 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v"
Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.292015 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.79198105 +0000 UTC m=+168.300151864 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.313598 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jcx9v\" (UID: \"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v"
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.397024 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.397825 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:17.897807652 +0000 UTC m=+168.405978466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.407268 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"]
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.451364 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v"
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.500946 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.501206 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.001186823 +0000 UTC m=+168.509357627 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.501270 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.501663 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.001647466 +0000 UTC m=+168.509818270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.562614 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fzfxb"]
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.606495 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.606893 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.106877671 +0000 UTC m=+168.615048475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.657614 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66pz7"]
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.708990 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.709433 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.209414008 +0000 UTC m=+168.717584812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.811645 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.812221 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.312205412 +0000 UTC m=+168.820376216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.918361 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:17 crc kubenswrapper[4656]: E0128 15:21:17.918786 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.418766674 +0000 UTC m=+168.926937478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:17 crc kubenswrapper[4656]: W0128 15:21:17.920798 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a1daea1_c036_4141_95dc_ce3567519970.slice/crio-148ee8da7835b5a10fc7953862e8b5815bc845e7df914412c85063ab85c35985 WatchSource:0}: Error finding container 148ee8da7835b5a10fc7953862e8b5815bc845e7df914412c85063ab85c35985: Status 404 returned error can't find the container with id 148ee8da7835b5a10fc7953862e8b5815bc845e7df914412c85063ab85c35985
Jan 28 15:21:17 crc kubenswrapper[4656]: I0128 15:21:17.971803 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz"]
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.037834 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.038359 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.538334861 +0000 UTC m=+169.046505665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
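The manager.go:1169 warnings here and at 15:21:17.011328 come from cAdvisor noticing a new cgroup before the runtime can answer for the container ID; the pod UID and container ID are both encoded in the cgroup path, and each warning resolves itself once the corresponding ContainerStarted event arrives below. A sketch that recovers both identifiers from such a path, using the path format copied from this log:

```python
# Sketch: extract the pod UID and container ID from a cgroup slice path as
# seen in the watch-event warnings. The example path is copied from this log.
import re

path = ("/kubepods.slice/kubepods-burstable.slice/"
        "kubepods-burstable-pod3a1daea1_c036_4141_95dc_ce3567519970.slice/"
        "crio-148ee8da7835b5a10fc7953862e8b5815bc845e7df914412c85063ab85c35985")

m = re.search(r"pod([0-9a-f_]+)\.slice/crio-([0-9a-f]+)", path)
pod_uid = m.group(1).replace("_", "-")  # slice names use _ where the UID has -
container_id = m.group(2)
print(pod_uid)       # 3a1daea1-c036-4141-95dc-ce3567519970 (machine-config-server-9f8ct)
print(container_id)  # 148ee8da7835...
```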
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.148314 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.148810 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.648792916 +0000 UTC m=+169.156963720 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.155120 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fzfxb" event={"ID":"3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f","Type":"ContainerStarted","Data":"c444b91f33ea3387e798dae82c23a652d26afa98a076c64a875f62f144b8c851"}
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.177372 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qh5kz" event={"ID":"01e19302-0470-49dd-88d5-9a568e820278","Type":"ContainerStarted","Data":"3b3c319f16c2221244c0eb7a38fdc5e703bda0da1f567d913f202c11977b7ecb"}
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.180998 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" event={"ID":"74b5802b-b8fb-48d1-8723-2c78386825db","Type":"ContainerStarted","Data":"4dc61eaad2ae739a312d7f61027749c556ace7e16aad00f87ef7790d83668fcc"}
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.182379 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" event={"ID":"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7","Type":"ContainerStarted","Data":"8331e16235e5a94b4f704932116317a3f82336aecaf9afbef2cef9053d0f0822"}
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.196651 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" event={"ID":"8be343e6-2e63-426f-95cc-06f64f7417cc","Type":"ContainerStarted","Data":"b822a32ac38f1696b3489f1d2e55ca92480ca95e59d1e7e33e05c35e97623ba2"}
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.234506 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" event={"ID":"57598ddc-f214-47b1-bdef-10bdf94607d1","Type":"ContainerStarted","Data":"fbcac6d52a862a259ed1af9b8d6a9518cd40e3f84f4fd2d0d85a19ce79e560a0"}
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.248855 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.249930 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.74989952 +0000 UTC m=+169.258070324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.253864 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.254098 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.75406311 +0000 UTC m=+169.262233914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.280257 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" event={"ID":"4d96cdec-34f1-44e2-9380-40475a720b31","Type":"ContainerStarted","Data":"abc69d5ee42bb4db1d2eae356a4d37a0e38909a824b894a227db13527ca68810"}
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.294915 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-hvbtc" podStartSLOduration=131.294899654 podStartE2EDuration="2m11.294899654s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:18.253777592 +0000 UTC m=+168.761948396" watchObservedRunningTime="2026-01-28 15:21:18.294899654 +0000 UTC m=+168.803070448"
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.302108 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" event={"ID":"c7db3547-02a4-4214-ad16-0b513f48b6d7","Type":"ContainerStarted","Data":"0b74a8011b4d971bf548e32aa618c17dfae7d6618e9f16a264c61aa9908601ca"}
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.315980 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr"]
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.318019 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9f8ct" event={"ID":"3a1daea1-c036-4141-95dc-ce3567519970","Type":"ContainerStarted","Data":"148ee8da7835b5a10fc7953862e8b5815bc845e7df914412c85063ab85c35985"}
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.356980 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.357938 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.857922955 +0000 UTC m=+169.366093759 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.458356 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.460550 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:18.960529934 +0000 UTC m=+169.468700788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.485601 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8l46h" podStartSLOduration=132.485577434 podStartE2EDuration="2m12.485577434s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:18.371770993 +0000 UTC m=+168.879941797" watchObservedRunningTime="2026-01-28 15:21:18.485577434 +0000 UTC m=+168.993748248"
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.486145 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-jrkdc" podStartSLOduration=131.48614085 podStartE2EDuration="2m11.48614085s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:18.484829262 +0000 UTC m=+168.993000056" watchObservedRunningTime="2026-01-28 15:21:18.48614085 +0000 UTC m=+168.994311654"
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.535102 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hqzn" podStartSLOduration=132.535081967 podStartE2EDuration="2m12.535081967s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:18.522749682 +0000 UTC m=+169.030920496" watchObservedRunningTime="2026-01-28 15:21:18.535081967 +0000 UTC m=+169.043252761"
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.560165 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.560584 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.060567489 +0000 UTC m=+169.568738293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.609001 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-chdx8" podStartSLOduration=131.608977521 podStartE2EDuration="2m11.608977521s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:18.558264253 +0000 UTC m=+169.066435057" watchObservedRunningTime="2026-01-28 15:21:18.608977521 +0000 UTC m=+169.117148325"
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.644223 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-zrrnn" podStartSLOduration=131.644150411 podStartE2EDuration="2m11.644150411s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:18.642746901 +0000 UTC m=+169.150917705" watchObservedRunningTime="2026-01-28 15:21:18.644150411 +0000 UTC m=+169.152321215"
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.644860 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" podStartSLOduration=131.644855782 podStartE2EDuration="2m11.644855782s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:18.61140982 +0000 UTC m=+169.119580624" watchObservedRunningTime="2026-01-28 15:21:18.644855782 +0000 UTC m=+169.153026586"
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.648028 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6596q"]
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.661813 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
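The pod_startup_latency_tracker lines are bookkeeping, not errors: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, and the zero-valued firstStartedPulling/lastFinishedPulling indicate that no image pull was observed. For downloads-7954f5f757-zrrnn, 15:21:18.644150411 minus 15:19:07 gives the logged 131.644150411s, i.e. the "2m11.644150411s" podStartE2EDuration. A quick check of that arithmetic, with timestamps copied (to microsecond precision) from the entry above:

```python
# Sketch: verify the podStartSLOduration arithmetic for downloads-7954f5f757-zrrnn.
# Timestamps are copied from the tracker line above, truncated to microseconds.
from datetime import datetime

created = datetime.fromisoformat("2026-01-28 15:19:07+00:00")
observed = datetime.fromisoformat("2026-01-28 15:21:18.644150+00:00")
delta = observed - created
print(delta.total_seconds())  # 131.64415 ~= podStartSLOduration
```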
\"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.662214 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.16220098 +0000 UTC m=+169.670371784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.769270 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.769691 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.269675589 +0000 UTC m=+169.777846393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.769822 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcdpp" podStartSLOduration=131.769808703 podStartE2EDuration="2m11.769808703s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:18.768926028 +0000 UTC m=+169.277096832" watchObservedRunningTime="2026-01-28 15:21:18.769808703 +0000 UTC m=+169.277979507" Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.874258 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.874568 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.874568 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.374556203 +0000 UTC m=+169.882727007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.925530 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-rmnzt"
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.975926 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.976133 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.976189 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.976217 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.976240 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
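Each failed volume operation arms a retry gate, visible as "No retries permitted until <timestamp> (durationBeforeRetry 500ms)"; the m=+169.x suffix is the offset on the kubelet process's monotonic clock. Below is a simplified model of that per-volume gate. It is not kubelet's nestedpendingoperations code: the 500ms initial delay is taken from the log, while the exponential growth and the roughly 2m2s cap are assumptions about kubelet's backoff noted in the comments.

```go
// Simplified model of the "No retries permitted until ..." gate.
// Assumption: kubelet backs off exponentially from 500ms up to about 2m2s;
// only the 500ms figure is confirmed by the log lines here.
package main

import (
	"fmt"
	"time"
)

// backoff records when an operation for one volume/pod key last failed and
// how long to wait before the next attempt.
type backoff struct {
	lastError time.Time
	delay     time.Duration
}

type operationGate struct {
	pending map[string]*backoff // key: volumeName + podName
}

// tryStart returns an error while the key is still inside its backoff
// window, mirroring the log line's wording and timestamp format.
func (g *operationGate) tryStart(key string, now time.Time) error {
	b, ok := g.pending[key]
	if !ok {
		return nil // first attempt runs immediately
	}
	if next := b.lastError.Add(b.delay); now.Before(next) {
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
			next.Format("2006-01-02 15:04:05.999999999 -0700 MST"), b.delay)
	}
	return nil
}

// recordError starts at the 500ms seen in the log and doubles the delay,
// capped (assumed cap: 2m2s).
func (g *operationGate) recordError(key string, now time.Time) {
	b, ok := g.pending[key]
	if !ok {
		b = &backoff{delay: 500 * time.Millisecond}
		g.pending[key] = b
	} else if b.delay < 2*time.Minute+2*time.Second {
		b.delay *= 2
	}
	b.lastError = now
}

func main() {
	g := &operationGate{pending: map[string]*backoff{}}
	key := "kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8/8f668bae"
	now := time.Now()
	g.recordError(key, now)
	fmt.Println(g.tryStart(key, now.Add(100*time.Millisecond))) // still gated
	fmt.Println(g.tryStart(key, now.Add(600*time.Millisecond))) // window passed: <nil>
}
```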
Jan 28 15:21:18 crc kubenswrapper[4656]: E0128 15:21:18.977313 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.477288476 +0000 UTC m=+169.985459280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.978963 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.988196 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:21:18 crc kubenswrapper[4656]: I0128 15:21:18.988750 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.003044 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.086044 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.086721 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.586706351 +0000 UTC m=+170.094877145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.086821 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.094665 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.168963 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.169311 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.187606 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.187950 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.68792627 +0000 UTC m=+170.196097074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.188041 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.188408 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.688399163 +0000 UTC m=+170.196569967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.205963 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" podStartSLOduration=132.205940938 podStartE2EDuration="2m12.205940938s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:19.129413598 +0000 UTC m=+169.637584402" watchObservedRunningTime="2026-01-28 15:21:19.205940938 +0000 UTC m=+169.714111742" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.218282 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.291932 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.292567 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.792547077 +0000 UTC m=+170.300717881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.306436 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq"] Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.306471 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj"] Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.368232 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qh5kz" event={"ID":"01e19302-0470-49dd-88d5-9a568e820278","Type":"ContainerStarted","Data":"1a1f7fa9e4a70f8c3d2d190949dcf97438110261ae70805c3877e4f327e5704a"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.398670 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.399004 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:19.898992386 +0000 UTC m=+170.407163190 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.399034 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" event={"ID":"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4","Type":"ContainerStarted","Data":"b94ea7db76aa2d3c56f14a78828a9083d75db0fe374f0cb5f89a603eac9f45d0"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.414398 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" event={"ID":"7f9360e3-7265-42ec-b104-d62ab6ec66f4","Type":"ContainerStarted","Data":"e87c18379671a289316957a7efd3141d299c15a91e36bdf01fb658feeb82828d"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.429424 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" event={"ID":"e4b1709f-307f-47d0-8648-125e2514c80e","Type":"ContainerStarted","Data":"d53619bcd27728a4abf49cb6582faadcbd201c77d2a5febb04ed32ad423d252d"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.471643 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.472002 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.472383 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" event={"ID":"d05cf23a-0ecb-4cd3-bafe-0fd7d930d916","Type":"ContainerStarted","Data":"ca0d33b5636daa18e114b9beb1ea9b05a2b3c0ca026d4f1dd7c94e49f0699bc4"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.493244 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9f8ct" event={"ID":"3a1daea1-c036-4141-95dc-ce3567519970","Type":"ContainerStarted","Data":"0949abc4bb4fdcf2e7310573f02d0a8320c185962ca2140cd89620611e66aaa5"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.499660 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.500427 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.0003861 +0000 UTC m=+170.508556914 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.516499 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" podStartSLOduration=133.516470852 podStartE2EDuration="2m13.516470852s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:19.499888566 +0000 UTC m=+170.008059370" watchObservedRunningTime="2026-01-28 15:21:19.516470852 +0000 UTC m=+170.024641656"
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.543030 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" event={"ID":"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7","Type":"ContainerStarted","Data":"2ccc89aaa2aabb8a9e2f194a6069859e93e63c7aba50492e992ba9629461da97"}
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.543484 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.560334 4656 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-4twj5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.560405 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" podUID="a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.578324 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-rmnzt" podStartSLOduration=133.57830574 podStartE2EDuration="2m13.57830574s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:19.57659502 +0000 UTC m=+170.084765824" watchObservedRunningTime="2026-01-28 15:21:19.57830574 +0000 UTC m=+170.086476544"
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.614415 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
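The paired patch_prober/prober lines above are kubelet's readiness probe hitting the pod IP and getting "connect: connection refused" because the container process has started but has not bound its port yet. A minimal model of one probe attempt follows, using the route-controller-manager URL from the log; the function name probeOnce is invented, and this is a sketch of the mechanism rather than kubelet's prober code.

```go
// Minimal model of an HTTP readiness probe attempt. probeOnce is an
// invented name; the output format imitates the log's
// probeResult="failure" output="Get \"...\": ... connection refused".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probeOnce(url string) (result, output string) {
	client := &http.Client{
		Timeout: time.Second,
		// Kubelet's HTTPS probes do not verify the serving certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		// Before the server binds its port this is
		// "dial tcp 10.217.0.5:8443: connect: connection refused".
		return "failure", err.Error()
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return "success", resp.Status
	}
	return "failure", resp.Status
}

func main() {
	result, output := probeOnce("https://10.217.0.5:8443/healthz")
	fmt.Printf("probeResult=%q output=%q\n", result, output)
}
```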
Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.616092 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.116078875 +0000 UTC m=+170.624249679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.643776 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw"
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.653770 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" event={"ID":"c96c02d3-be15-47e5-a4bd-e65644751b10","Type":"ContainerStarted","Data":"9ad3257020ab53e6d6e7736abdaf27636b91d841fdedf6bab1acda9d173e8177"}
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.683343 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" event={"ID":"c7b09f99-0d13-49a0-8b8d-fc77915a171d","Type":"ContainerStarted","Data":"93539b6344f80c28c33a800b6b17b3b195013261b9320da802d38ff972cee5ee"}
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.684329 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7"
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.692464 4656 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-66pz7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.692520 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.717799 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
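The pod_startup_latency_tracker numbers scattered through these lines are simple clock arithmetic. When firstStartedPulling and lastFinishedPulling are the zero time ("0001-01-01 00:00:00 +0000 UTC", meaning the image was already on disk), no pull window is excluded and podStartSLOduration equals podStartE2EDuration: time observed running minus podCreationTimestamp. The sketch below reproduces the openshift-config-operator figures from a few lines up; the pull-exclusion branch is an assumption about the metric's intent and is not exercised by these log lines.

```go
// Reconstruction of the "Observed pod startup duration" arithmetic using
// the openshift-config-operator values from the log above.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-28T15:19:06Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2026-01-28T15:21:19.516470852Z")

	// Zero pull timestamps: the image was never pulled for this start.
	var firstStartedPulling, lastFinishedPulling time.Time

	e2e := observed.Sub(created)
	slo := e2e
	if !firstStartedPulling.IsZero() {
		// Hypothetical branch: exclude image-pull time from the SLO metric.
		slo -= lastFinishedPulling.Sub(firstStartedPulling)
	}

	fmt.Printf("podStartE2EDuration=%q podStartSLOduration=%.9f\n", e2e.String(), slo.Seconds())
	// prints: podStartE2EDuration="2m13.516470852s" podStartSLOduration=133.516470852
}
```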
Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.719109 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.219084196 +0000 UTC m=+170.727255000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.740668 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" event={"ID":"68e8cbb8-5319-4c56-9636-3bcefa32d29e","Type":"ContainerStarted","Data":"97c4b00ccbadf2d8928e7cf1797f8981e4c7c8d407a4cbc824055c6fda18731f"}
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.757366 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" podStartSLOduration=133.757344015 podStartE2EDuration="2m13.757344015s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:19.741734467 +0000 UTC m=+170.249905271" watchObservedRunningTime="2026-01-28 15:21:19.757344015 +0000 UTC m=+170.265514819"
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.815960 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" event={"ID":"74b5802b-b8fb-48d1-8723-2c78386825db","Type":"ContainerStarted","Data":"a3c344d4a99b4cc3c2b785ff5edacf8eec4aa5faf13d43bdf4fe9c80a6160a48"}
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.816863 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.821930 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.823014 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.322999372 +0000 UTC m=+170.831170176 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.833204 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" event={"ID":"c7db3547-02a4-4214-ad16-0b513f48b6d7","Type":"ContainerStarted","Data":"6fd553595df3877db63f6eb1972718c3ac308dc4aee9362b5e813833058f9b2a"} Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.833248 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl"] Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.862445 4656 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7jpgn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.862489 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" podUID="74b5802b-b8fb-48d1-8723-2c78386825db" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.862646 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-98brw" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.878432 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9f8ct" podStartSLOduration=6.878411225 podStartE2EDuration="6.878411225s" podCreationTimestamp="2026-01-28 15:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:19.877415496 +0000 UTC m=+170.385586300" watchObservedRunningTime="2026-01-28 15:21:19.878411225 +0000 UTC m=+170.386582029" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.923405 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:19 crc kubenswrapper[4656]: E0128 15:21:19.926610 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.426590389 +0000 UTC m=+170.934761193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.936060 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lfmv6" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.955284 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" podStartSLOduration=132.955258533 podStartE2EDuration="2m12.955258533s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:19.914248145 +0000 UTC m=+170.422418949" watchObservedRunningTime="2026-01-28 15:21:19.955258533 +0000 UTC m=+170.463429337" Jan 28 15:21:19 crc kubenswrapper[4656]: I0128 15:21:19.956497 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267"] Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.025571 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.026147 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.52613372 +0000 UTC m=+171.034304524 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.055028 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" podStartSLOduration=133.05500506 podStartE2EDuration="2m13.05500506s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:20.051994464 +0000 UTC m=+170.560165268" watchObservedRunningTime="2026-01-28 15:21:20.05500506 +0000 UTC m=+170.563175864" Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.056046 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sg88v"] Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.094868 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn"] Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.117854 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tvwnv"] Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.138344 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bbdhj" podStartSLOduration=133.138326285 podStartE2EDuration="2m13.138326285s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:20.131222001 +0000 UTC m=+170.639392795" watchObservedRunningTime="2026-01-28 15:21:20.138326285 +0000 UTC m=+170.646497089" Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.138830 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.139192 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.639148968 +0000 UTC m=+171.147319772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:20 crc kubenswrapper[4656]: W0128 15:21:20.147715 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39d8752e_2237_4115_b66c_a9afc736dffe.slice/crio-a51e5a114f6923b38afae7621e06fcf6d11d5519f48c3207f94d9dc64f5ccb33 WatchSource:0}: Error finding container a51e5a114f6923b38afae7621e06fcf6d11d5519f48c3207f94d9dc64f5ccb33: Status 404 returned error can't find the container with id a51e5a114f6923b38afae7621e06fcf6d11d5519f48c3207f94d9dc64f5ccb33
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.151793 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs"]
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.173288 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z9pc5"]
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.199331 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" podStartSLOduration=134.199311928 podStartE2EDuration="2m14.199311928s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:20.17922848 +0000 UTC m=+170.687399294" watchObservedRunningTime="2026-01-28 15:21:20.199311928 +0000 UTC m=+170.707482732"
Jan 28 15:21:20 crc kubenswrapper[4656]: W0128 15:21:20.230353 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c23ee3_ca6a_4660_ab9f_84c7d70b7a30.slice/crio-5138d54fc7537701a7c5ffa731096fa8969cc6fe7bb3ca7060f2413eca99bb60 WatchSource:0}: Error finding container 5138d54fc7537701a7c5ffa731096fa8969cc6fe7bb3ca7060f2413eca99bb60: Status 404 returned error can't find the container with id 5138d54fc7537701a7c5ffa731096fa8969cc6fe7bb3ca7060f2413eca99bb60
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.244883 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
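The W manager.go:1169 lines come from cAdvisor, which runs inside kubelet: a cgroup watch event announces a new crio-<id> cgroup, but by the time the runtime is asked about that container the lookup returns 404, most likely a benign race while containers are being created and torn down in quick succession. The sketch below shows one tolerant way to handle such an event; errNotFound, lookupContainer, and watchEvent are invented stand-ins, not cAdvisor's API.

```go
// Sketch of tolerating the "Status 404 ... can't find the container" race
// seen in the manager.go warnings. All names here are invented for
// illustration; this is not cAdvisor's implementation.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("Status 404 returned error can't find the container")

// lookupContainer stands in for asking the container runtime about a cgroup
// that just appeared; it 404s when the container is already gone or not yet
// visible on the runtime side. Here it always fails, to simulate the race.
func lookupContainer(id string) error {
	return errNotFound
}

type watchEvent struct {
	EventType int
	Name      string // cgroup path, e.g. .../crio-<container id>
}

func processEvent(ev watchEvent, containerID string) {
	if err := lookupContainer(containerID); err != nil {
		if errors.Is(err, errNotFound) {
			// Benign: a later event or periodic relist will pick the
			// container up if it still exists; warn and move on.
			fmt.Printf("W Failed to process watch event %+v: Error finding container %s: %v\n",
				ev, containerID, err)
			return
		}
	}
	fmt.Println("container tracked:", containerID)
}

func main() {
	processEvent(watchEvent{EventType: 0, Name: "/kubepods.slice/.../crio-a51e5a114f69"}, "a51e5a114f69")
}
```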
Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.245250 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.745236137 +0000 UTC m=+171.253406941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.261102 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz"]
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.365626 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.365969 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.865951877 +0000 UTC m=+171.374122681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.380020 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" podStartSLOduration=133.380002491 podStartE2EDuration="2m13.380002491s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:20.378712214 +0000 UTC m=+170.886883018" watchObservedRunningTime="2026-01-28 15:21:20.380002491 +0000 UTC m=+170.888173295"
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.384656 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-c8r6q" podStartSLOduration=133.384633454 podStartE2EDuration="2m13.384633454s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:20.342777351 +0000 UTC m=+170.850948155" watchObservedRunningTime="2026-01-28 15:21:20.384633454 +0000 UTC m=+170.892804258"
Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.466635 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 
15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.466917 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:20.966905358 +0000 UTC m=+171.475076162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.492265 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls"] Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.567791 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.568200 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.068183729 +0000 UTC m=+171.576354523 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.568293 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.568578 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.06857181 +0000 UTC m=+171.576742614 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:20 crc kubenswrapper[4656]: W0128 15:21:20.596951 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode69663d4_ff3d_4991_804a_cf8d53a4c3ff.slice/crio-462599b582f5692efaa8c3917147798f2e336595316933b83b88f70d4dc57b35 WatchSource:0}: Error finding container 462599b582f5692efaa8c3917147798f2e336595316933b83b88f70d4dc57b35: Status 404 returned error can't find the container with id 462599b582f5692efaa8c3917147798f2e336595316933b83b88f70d4dc57b35 Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.645057 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws"] Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.674776 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.675073 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.175056601 +0000 UTC m=+171.683227405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.712639 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh"] Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.742861 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jgtrg"] Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.742923 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v"] Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.777320 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.777774 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.277754982 +0000 UTC m=+171.785925786 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:20 crc kubenswrapper[4656]: W0128 15:21:20.853817 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod854de0fc_bfab_4d4d_9931_b84561234f71.slice/crio-de62fbe4037b1ede8c829c399b8607c7e9ad2cfbe947ef12fe8a1ad068c3ab4e WatchSource:0}: Error finding container de62fbe4037b1ede8c829c399b8607c7e9ad2cfbe947ef12fe8a1ad068c3ab4e: Status 404 returned error can't find the container with id de62fbe4037b1ede8c829c399b8607c7e9ad2cfbe947ef12fe8a1ad068c3ab4e Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.884676 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.884904 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 15:21:21.384873241 +0000 UTC m=+171.893044045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.884951 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.885302 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.385290493 +0000 UTC m=+171.893461287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.926293 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" event={"ID":"e4b1709f-307f-47d0-8648-125e2514c80e","Type":"ContainerStarted","Data":"1c8afc77c7e68866b80c94cdf59d386aa2a9748d272fc12b5304975aebcb58dc"} Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.930072 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.936633 4656 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8qvpr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.936767 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" podUID="e4b1709f-307f-47d0-8648-125e2514c80e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.962716 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mzh45" event={"ID":"8be343e6-2e63-426f-95cc-06f64f7417cc","Type":"ContainerStarted","Data":"71d182287b788f87ff10ed63e21bfc3c2bd149cbc77d7f9ca171c01cc90db7ee"} Jan 28 15:21:20 
crc kubenswrapper[4656]: I0128 15:21:20.975610 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" podStartSLOduration=133.975592138 podStartE2EDuration="2m13.975592138s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:20.974902719 +0000 UTC m=+171.483073523" watchObservedRunningTime="2026-01-28 15:21:20.975592138 +0000 UTC m=+171.483762942" Jan 28 15:21:20 crc kubenswrapper[4656]: I0128 15:21:20.990797 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:20 crc kubenswrapper[4656]: E0128 15:21:20.991115 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.491096594 +0000 UTC m=+171.999267388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.047715 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" event={"ID":"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30","Type":"ContainerStarted","Data":"5138d54fc7537701a7c5ffa731096fa8969cc6fe7bb3ca7060f2413eca99bb60"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.062670 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" event={"ID":"585af9c8-b122-49d3-8640-3dc5fb1613ab","Type":"ContainerStarted","Data":"78b47b7183e7eda72ddade2e1c4a26af07b8b155c25b30e05133914f6c085165"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.062720 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" event={"ID":"585af9c8-b122-49d3-8640-3dc5fb1613ab","Type":"ContainerStarted","Data":"0bd41a410018df5e926d445f57e7f74c7045ed42299b9abc86fc3b13ea531745"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.066508 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" event={"ID":"50ad627b-637f-4763-96c1-4c1beb352c70","Type":"ContainerStarted","Data":"79341d3335196fee4983755fab00ce18943dc4fd5779dc86a40aec7f62882ca8"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.067681 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fzfxb" event={"ID":"3d4974c3-6412-4c7b-a4b6-bd998d3fbe4f","Type":"ContainerStarted","Data":"191d80c0b580a061b613a915363bd4782cfd84d5cc4b1be7196124b809f57cc8"} Jan 28 15:21:21 crc 
kubenswrapper[4656]: I0128 15:21:21.092140 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.092614 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.592596101 +0000 UTC m=+172.100766905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.124732 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" event={"ID":"c96c02d3-be15-47e5-a4bd-e65644751b10","Type":"ContainerStarted","Data":"c12c10b63d65aac5e1c73df108aa8d8aecd8217e8b9ca6647ef9e26b472d3491"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.129413 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-fzfxb" podStartSLOduration=8.129387069 podStartE2EDuration="8.129387069s" podCreationTimestamp="2026-01-28 15:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:21.124077856 +0000 UTC m=+171.632248660" watchObservedRunningTime="2026-01-28 15:21:21.129387069 +0000 UTC m=+171.637557873" Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.153332 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" event={"ID":"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428","Type":"ContainerStarted","Data":"e7a388047ae6559983f6c0d263656f61087cd167de19ccfd97543f5d6c14d8ed"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.184442 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vqcvt" podStartSLOduration=134.18441907 podStartE2EDuration="2m14.18441907s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:21.179654223 +0000 UTC m=+171.687825027" watchObservedRunningTime="2026-01-28 15:21:21.18441907 +0000 UTC m=+171.692589874" Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.223427 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 
15:21:21.224961 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.724942965 +0000 UTC m=+172.233113769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.283838 4656 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-66pz7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.283883 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.284159 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" event={"ID":"25578c16-69f7-48c0-8a44-040950b9b8a1","Type":"ContainerStarted","Data":"8b5651c9577faa4a2a76aaa78d0b3751ec036612cc4cb26724879ce32d32d8f2"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.284204 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" event={"ID":"25578c16-69f7-48c0-8a44-040950b9b8a1","Type":"ContainerStarted","Data":"1e258fe47c6a006d11d349c174bef29feb367cacb2c7876fb9105c8ca262268d"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.284217 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sg88v" event={"ID":"4c5b7670-d8d9-4ea1-822d-788709c62ee5","Type":"ContainerStarted","Data":"f7966fca94c13ecad508e9989912c019094f7ff5c7625b996d9d7524907e4a42"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.284226 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" event={"ID":"c7b09f99-0d13-49a0-8b8d-fc77915a171d","Type":"ContainerStarted","Data":"bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.303072 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" event={"ID":"7f9360e3-7265-42ec-b104-d62ab6ec66f4","Type":"ContainerStarted","Data":"5d4ee4911f1ddfb54dd51b7df7399797ec6c0ec648b61bf998307bbb48e3c1db"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.314984 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" 
event={"ID":"39d8752e-2237-4115-b66c-a9afc736dffe","Type":"ContainerStarted","Data":"a51e5a114f6923b38afae7621e06fcf6d11d5519f48c3207f94d9dc64f5ccb33"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.323393 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" event={"ID":"e69663d4-ff3d-4991-804a-cf8d53a4c3ff","Type":"ContainerStarted","Data":"462599b582f5692efaa8c3917147798f2e336595316933b83b88f70d4dc57b35"} Jan 28 15:21:21 crc kubenswrapper[4656]: W0128 15:21:21.323519 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-09136cbfeb95e9fc4f334503a475c8bab1cd4bc5053014a68c636f1741a3adbd WatchSource:0}: Error finding container 09136cbfeb95e9fc4f334503a475c8bab1cd4bc5053014a68c636f1741a3adbd: Status 404 returned error can't find the container with id 09136cbfeb95e9fc4f334503a475c8bab1cd4bc5053014a68c636f1741a3adbd Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.327372 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.328628 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.828614854 +0000 UTC m=+172.336785658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.348303 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" event={"ID":"390f2e18-1f51-46da-93cf-da6b0d524b0d","Type":"ContainerStarted","Data":"e06b6a69c3004d64fea7e46c4d54ea1a870ee797742cbc9b158b473a61cc59a8"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.350153 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" event={"ID":"4d96cdec-34f1-44e2-9380-40475a720b31","Type":"ContainerStarted","Data":"6b567df716488861abda19cfde6a0636d183de50de321377ec3111ed56b7a17d"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.384059 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" event={"ID":"ec3667a0-3a61-42f4-85cd-d9e7eb774fd4","Type":"ContainerStarted","Data":"569a67924dc0423418b224e60ea5b0e701d514611e9881ca6a7f2dc92dabeb50"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.384949 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.438018 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.438518 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.938450791 +0000 UTC m=+172.446621595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.439416 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.440351 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" event={"ID":"d05cf23a-0ecb-4cd3-bafe-0fd7d930d916","Type":"ContainerStarted","Data":"006e3ffcc8aed4941813fc81e75331ea15a04953fdfb85cb59fdeab0b476b190"} Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.442777 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:21.942763865 +0000 UTC m=+172.450934669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.497576 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" event={"ID":"d8d474a0-0bf1-467d-ab77-4b94a17f7881","Type":"ContainerStarted","Data":"44b4fa9e42e4d262fbcc0a9185da187df76399889a9b594190ea2609a576170c"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.510465 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" event={"ID":"c50cc4de-dd25-4337-a532-3384d5a87626","Type":"ContainerStarted","Data":"b6ef49c0366fd30b3220d08592ed50831f49c59a0192b23fe66ac253689a6e5d"} Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.522194 4656 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9hdsz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.522248 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" podUID="ec3667a0-3a61-42f4-85cd-d9e7eb774fd4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 
15:21:21.541290 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.541669 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.041647907 +0000 UTC m=+172.549818711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.650029 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.651830 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.151815603 +0000 UTC m=+172.659986407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.752277 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.752482 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.252450655 +0000 UTC m=+172.760621459 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.752742 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.753229 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.253214707 +0000 UTC m=+172.761385511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.854272 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.854625 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.354610931 +0000 UTC m=+172.862781735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.900233 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.907118 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:21 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:21 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:21 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.907155 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:21 crc kubenswrapper[4656]: I0128 15:21:21.956912 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:21 crc kubenswrapper[4656]: E0128 15:21:21.957360 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.457345914 +0000 UTC m=+172.965516718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.058547 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.058822 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.5588065 +0000 UTC m=+173.066977294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.161022 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.161384 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.661370127 +0000 UTC m=+173.169540931 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.264279 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.264651 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.764628365 +0000 UTC m=+173.272799169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.264857 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.265217 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.765209342 +0000 UTC m=+173.273380146 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.367977 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.368451 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.868434738 +0000 UTC m=+173.376605542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.418657 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.471116 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.472555 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:22.972542421 +0000 UTC m=+173.480713225 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.515748 4656 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7jpgn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.515820 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" podUID="74b5802b-b8fb-48d1-8723-2c78386825db" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.562578 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" event={"ID":"390f2e18-1f51-46da-93cf-da6b0d524b0d","Type":"ContainerStarted","Data":"2427060bc22a7a2c3cbdc04fa905915dd99eada2e9402ee77171e61ec043a53b"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.574972 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.576897 4656 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.076876979 +0000 UTC m=+173.585047793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.603471 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jgtrg" event={"ID":"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938","Type":"ContainerStarted","Data":"ee0ff65b029e86693fa9cc594e9988ae98f70684291f645867a573844bbfbe5e"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.603527 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jgtrg" event={"ID":"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938","Type":"ContainerStarted","Data":"319f8803786fa85989a749b450f5e4f43e414499c7cbd7006a56cebe692b25c6"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.633884 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"85f7d091566a3d3b830b63d15a60ebe6ef417e94f6ca3320950ea19c5c587c5f"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.633930 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"09136cbfeb95e9fc4f334503a475c8bab1cd4bc5053014a68c636f1741a3adbd"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.634466 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.679135 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.680489 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.180472117 +0000 UTC m=+173.688642911 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.682184 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" event={"ID":"50ad627b-637f-4763-96c1-4c1beb352c70","Type":"ContainerStarted","Data":"50356fe90f10b02ee6f2f3e97a6f34d34ffe9764a56a4ac5834da65785f675c7"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.682226 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" event={"ID":"50ad627b-637f-4763-96c1-4c1beb352c70","Type":"ContainerStarted","Data":"3699701108a2dd7eac0729a5347663b8030b7f697b1d552eb5b050d536a9ac54"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.682880 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.696236 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bd1b7a35c96020919b0abc5141f1aa1a830266231d673e2f939a4f09a82046cd"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.696286 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1c5e2bb2ad9bb3bb6d579fdef0a1cda31d7e0e25cb57f404fa359cf0e592bafd"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.732470 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" event={"ID":"854de0fc-bfab-4d4d-9931-b84561234f71","Type":"ContainerStarted","Data":"74af8a954515c5f90d6338105118243f152ce5645fbf58b6274ebf6d47725d98"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.732511 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" event={"ID":"854de0fc-bfab-4d4d-9931-b84561234f71","Type":"ContainerStarted","Data":"de62fbe4037b1ede8c829c399b8607c7e9ad2cfbe947ef12fe8a1ad068c3ab4e"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.732716 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.734522 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" event={"ID":"51c23ee3-ca6a-4660-ab9f-84c7d70b7a30","Type":"ContainerStarted","Data":"2b0a5de010c4db3b7a484a39aa308dddc2f8add652592a6831edf431cf107c5b"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.735860 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"90d4b7e1f8ed086cfdd394059106600cbb6e1094f9dd37a820a2a62af2d50f12"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.735880 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4fdc909d2a176936ea42c47b65103741119040f5aa1bf860f5cd39d83b29c7ce"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.738183 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" event={"ID":"585af9c8-b122-49d3-8640-3dc5fb1613ab","Type":"ContainerStarted","Data":"9c12df696a51f2e3474b678ad4f0af48e2358c5f5fcf0c7bd1ce3d741ee7098e"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.739752 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" event={"ID":"d05cf23a-0ecb-4cd3-bafe-0fd7d930d916","Type":"ContainerStarted","Data":"84b271ee617c17b33eddfa22de05223bfee8268debf4315845201a32c6ffe279"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.741690 4656 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-879dh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.741728 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" podUID="854de0fc-bfab-4d4d-9931-b84561234f71" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.742136 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" event={"ID":"c50cc4de-dd25-4337-a532-3384d5a87626","Type":"ContainerStarted","Data":"f8cf3e2f02a8adc27690493735a08ec17bba5663accabe567959ff41a13aa583"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.756296 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" event={"ID":"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9","Type":"ContainerStarted","Data":"7edc8ce80170fb625b340ebafd351b32972cf8b8df5461abd8c2d61d65e5ac65"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.756534 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" event={"ID":"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9","Type":"ContainerStarted","Data":"bd4bd9dfb1b4432e1ef6472981e4d256f892f8132aca7a1e3a36d9ba43d88ca5"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.769427 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" event={"ID":"39d8752e-2237-4115-b66c-a9afc736dffe","Type":"ContainerStarted","Data":"165dc058142988df212c57bce3d513d2e48f4fd66ecb8b364f8a450eba17a7cd"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.769474 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" 
event={"ID":"39d8752e-2237-4115-b66c-a9afc736dffe","Type":"ContainerStarted","Data":"c71633dcf0ca06c5a43ba83df489e09335fb57f25535cc9c4a512daf8db1a3a1"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.774071 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" event={"ID":"d8d474a0-0bf1-467d-ab77-4b94a17f7881","Type":"ContainerStarted","Data":"dd11706f89828f6e170d52ed4ad520fc7897069f7694eeaa5ae551cf364dbd25"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.774109 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" event={"ID":"d8d474a0-0bf1-467d-ab77-4b94a17f7881","Type":"ContainerStarted","Data":"dff9ebfc1952e8b2fe521ef41e8c4466736015885dcee3079e2874673c0317f6"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.775726 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" event={"ID":"a08f3bdf-18fe-4bc2-aee2-10a3dec3f428","Type":"ContainerStarted","Data":"6184fea5b3a63f86696cd77a9ac0ea91e06ad7dae86e086c92f6046820633bab"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.781648 4656 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-66pz7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.781709 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.781764 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" event={"ID":"e69663d4-ff3d-4991-804a-cf8d53a4c3ff","Type":"ContainerStarted","Data":"04e01955463a69d93a67e7bc1115f506b452a1e88766b311bb4eecc1a0486190"} Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.782102 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.785618 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.285596248 +0000 UTC m=+173.793767042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.786579 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.788014 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.288007017 +0000 UTC m=+173.796177821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.834142 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qvpr" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.888916 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.890669 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.390648597 +0000 UTC m=+173.898819391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.917383 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:22 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:22 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:22 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.917461 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.919411 4656 patch_prober.go:28] interesting pod/apiserver-76f77b778f-j8tlz container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 28 15:21:22 crc kubenswrapper[4656]: [+]log ok Jan 28 15:21:22 crc kubenswrapper[4656]: [+]etcd ok Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/generic-apiserver-start-informers ok Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/max-in-flight-filter ok Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 28 15:21:22 crc kubenswrapper[4656]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 28 15:21:22 crc kubenswrapper[4656]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/project.openshift.io-projectcache ok Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/openshift.io-startinformers ok Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 28 15:21:22 crc kubenswrapper[4656]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 28 15:21:22 crc kubenswrapper[4656]: livez check failed Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.919497 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" podUID="57598ddc-f214-47b1-bdef-10bdf94607d1" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.948050 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xlcjq" podStartSLOduration=135.948023116 podStartE2EDuration="2m15.948023116s" 
podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:22.933815408 +0000 UTC m=+173.441986212" watchObservedRunningTime="2026-01-28 15:21:22.948023116 +0000 UTC m=+173.456193920" Jan 28 15:21:22 crc kubenswrapper[4656]: I0128 15:21:22.993563 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:22 crc kubenswrapper[4656]: E0128 15:21:22.996774 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.496749147 +0000 UTC m=+174.004919951 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.084191 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-62tcz" podStartSLOduration=136.084147968 podStartE2EDuration="2m16.084147968s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.083594783 +0000 UTC m=+173.591765607" watchObservedRunningTime="2026-01-28 15:21:23.084147968 +0000 UTC m=+173.592318782" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.094513 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.094900 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.594881777 +0000 UTC m=+174.103052581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.195750 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.196077 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.696061315 +0000 UTC m=+174.204232119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.241672 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" podStartSLOduration=137.241643315 podStartE2EDuration="2m17.241643315s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.234625063 +0000 UTC m=+173.742795867" watchObservedRunningTime="2026-01-28 15:21:23.241643315 +0000 UTC m=+173.749814119" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.297313 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.297537 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.79750191 +0000 UTC m=+174.305672714 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.340914 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" podStartSLOduration=136.340888577 podStartE2EDuration="2m16.340888577s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.338820028 +0000 UTC m=+173.846990842" watchObservedRunningTime="2026-01-28 15:21:23.340888577 +0000 UTC m=+173.849059381" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.343150 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-tvwnv" podStartSLOduration=136.343138202 podStartE2EDuration="2m16.343138202s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.282140919 +0000 UTC m=+173.790311723" watchObservedRunningTime="2026-01-28 15:21:23.343138202 +0000 UTC m=+173.851309006" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.389966 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6zb9x" podStartSLOduration=137.389945537 podStartE2EDuration="2m17.389945537s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.389917146 +0000 UTC m=+173.898087950" watchObservedRunningTime="2026-01-28 15:21:23.389945537 +0000 UTC m=+173.898116341" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.399070 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.399681 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:23.899656636 +0000 UTC m=+174.407827440 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.459225 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qh5kz" podStartSLOduration=136.459195728 podStartE2EDuration="2m16.459195728s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.450798526 +0000 UTC m=+173.958969340" watchObservedRunningTime="2026-01-28 15:21:23.459195728 +0000 UTC m=+173.967366542" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.500907 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.501142 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.001106132 +0000 UTC m=+174.509276946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.501256 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.501557 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.001544445 +0000 UTC m=+174.509715249 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.547766 4656 csr.go:261] certificate signing request csr-qq48q is approved, waiting to be issued Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.561422 4656 csr.go:257] certificate signing request csr-qq48q is issued Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.602190 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.602446 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.102405374 +0000 UTC m=+174.610576188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.602543 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.603017 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.103007161 +0000 UTC m=+174.611177965 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.606081 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xczjn" podStartSLOduration=136.606062599 podStartE2EDuration="2m16.606062599s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.604522894 +0000 UTC m=+174.112693718" watchObservedRunningTime="2026-01-28 15:21:23.606062599 +0000 UTC m=+174.114233403" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.607133 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdplj" podStartSLOduration=136.607127619 podStartE2EDuration="2m16.607127619s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.508274758 +0000 UTC m=+174.016445562" watchObservedRunningTime="2026-01-28 15:21:23.607127619 +0000 UTC m=+174.115298423" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.704137 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.704459 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.204442926 +0000 UTC m=+174.712613730 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.708887 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dxhzs" podStartSLOduration=136.708861573 podStartE2EDuration="2m16.708861573s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.667655019 +0000 UTC m=+174.175825823" watchObservedRunningTime="2026-01-28 15:21:23.708861573 +0000 UTC m=+174.217032397" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.738142 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-z9pc5" podStartSLOduration=135.738115134 podStartE2EDuration="2m15.738115134s" podCreationTimestamp="2026-01-28 15:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.709924444 +0000 UTC m=+174.218095258" watchObservedRunningTime="2026-01-28 15:21:23.738115134 +0000 UTC m=+174.246285938" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.763463 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" podStartSLOduration=138.763448712 podStartE2EDuration="2m18.763448712s" podCreationTimestamp="2026-01-28 15:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.740189614 +0000 UTC m=+174.248360428" watchObservedRunningTime="2026-01-28 15:21:23.763448712 +0000 UTC m=+174.271619516" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.781866 4656 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7jpgn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.782230 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" podUID="74b5802b-b8fb-48d1-8723-2c78386825db" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.782565 4656 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9hdsz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.782628 4656 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" podUID="ec3667a0-3a61-42f4-85cd-d9e7eb774fd4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.783406 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sg88v" event={"ID":"4c5b7670-d8d9-4ea1-822d-788709c62ee5","Type":"ContainerStarted","Data":"047624a09e8965505784965d1c922c05ab8cc3345ea2d1d455281b6afecd6680"} Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.784777 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jgtrg" event={"ID":"0f9c8e83-6ca2-4b24-9f50-4d7d48af3938","Type":"ContainerStarted","Data":"15db3dae36d477e53fb31eb72e460d2ed94442d08f4c61f3d1f1990808794d5e"} Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.785023 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-jgtrg" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.788619 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" event={"ID":"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9","Type":"ContainerStarted","Data":"14efe49be0f1cf209a9694e16a2c7e25612b1e5546c3a42f5bd715797b6c45e7"} Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.815092 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.820041 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.320018788 +0000 UTC m=+174.828189632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.827556 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm7ls" podStartSLOduration=136.827532484 podStartE2EDuration="2m16.827532484s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.824783165 +0000 UTC m=+174.332953989" watchObservedRunningTime="2026-01-28 15:21:23.827532484 +0000 UTC m=+174.335703288" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.875102 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-879dh" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.887240 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w7bws" podStartSLOduration=136.887213499 podStartE2EDuration="2m16.887213499s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.885547421 +0000 UTC m=+174.393718225" watchObservedRunningTime="2026-01-28 15:21:23.887213499 +0000 UTC m=+174.395384313" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.887350 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" podStartSLOduration=136.887345673 podStartE2EDuration="2m16.887345673s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.849537716 +0000 UTC m=+174.357708520" watchObservedRunningTime="2026-01-28 15:21:23.887345673 +0000 UTC m=+174.395516477" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.897752 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:23 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:23 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:23 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.897833 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.920465 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.921204 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.420764133 +0000 UTC m=+174.928934937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:23 crc kubenswrapper[4656]: I0128 15:21:23.921324 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:23 crc kubenswrapper[4656]: E0128 15:21:23.922389 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.42237414 +0000 UTC m=+174.930544954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.022678 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.023129 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.523107645 +0000 UTC m=+175.031278449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.060810 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6596q" podStartSLOduration=136.060783958 podStartE2EDuration="2m16.060783958s" podCreationTimestamp="2026-01-28 15:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:24.058910414 +0000 UTC m=+174.567081218" watchObservedRunningTime="2026-01-28 15:21:24.060783958 +0000 UTC m=+174.568954762" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.062579 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" podStartSLOduration=137.062565629 podStartE2EDuration="2m17.062565629s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.991723833 +0000 UTC m=+174.499894637" watchObservedRunningTime="2026-01-28 15:21:24.062565629 +0000 UTC m=+174.570736443" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.124511 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.124867 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.624854359 +0000 UTC m=+175.133025163 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.144716 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jgtrg" podStartSLOduration=11.144696539 podStartE2EDuration="11.144696539s" podCreationTimestamp="2026-01-28 15:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:24.136934236 +0000 UTC m=+174.645105040" watchObservedRunningTime="2026-01-28 15:21:24.144696539 +0000 UTC m=+174.652867343" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.225970 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.226194 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.72613455 +0000 UTC m=+175.234305374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.226337 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.226764 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.726743617 +0000 UTC m=+175.234914421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.319265 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gzr9v"] Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.320539 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.324268 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.326967 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.327203 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.827154203 +0000 UTC m=+175.335325017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.327514 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.327869 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.827853713 +0000 UTC m=+175.336024527 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.355618 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gzr9v"] Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.429435 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.429771 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.929736422 +0000 UTC m=+175.437907226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.430373 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-utilities\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.430474 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.430565 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-catalog-content\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.430684 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftms7\" (UniqueName: \"kubernetes.io/projected/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-kube-api-access-ftms7\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " 
pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.431057 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:24.931044269 +0000 UTC m=+175.439215073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.481187 4656 patch_prober.go:28] interesting pod/apiserver-76f77b778f-j8tlz container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]log ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]etcd ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/generic-apiserver-start-informers ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/max-in-flight-filter ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 28 15:21:24 crc kubenswrapper[4656]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/project.openshift.io-projectcache ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/openshift.io-startinformers ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 28 15:21:24 crc kubenswrapper[4656]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 28 15:21:24 crc kubenswrapper[4656]: livez check failed Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.481951 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" podUID="57598ddc-f214-47b1-bdef-10bdf94607d1" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.510974 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w4vpf"] Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.512260 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: W0128 15:21:24.524724 4656 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: secrets "community-operators-dockercfg-dmngl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.524784 4656 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"community-operators-dockercfg-dmngl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.531511 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.531702 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.031663911 +0000 UTC m=+175.539834725 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.531750 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftms7\" (UniqueName: \"kubernetes.io/projected/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-kube-api-access-ftms7\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.531921 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-utilities\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.531998 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.532069 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-catalog-content\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.532394 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-utilities\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.532554 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.032533736 +0000 UTC m=+175.540704530 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.532684 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-catalog-content\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.546099 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w4vpf"] Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.560328 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftms7\" (UniqueName: \"kubernetes.io/projected/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-kube-api-access-ftms7\") pod \"certified-operators-gzr9v\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.564311 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-28 15:16:23 +0000 UTC, rotation deadline is 2026-10-14 07:46:28.543650988 +0000 UTC Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.564572 4656 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6208h25m3.979083081s for next certificate rotation Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.635603 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.637373 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.637611 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-utilities\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.637643 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-catalog-content\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.637664 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d5jj\" (UniqueName: \"kubernetes.io/projected/7de9fc74-9948-4e73-ac93-25f9c22189ce-kube-api-access-2d5jj\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.637906 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.137887234 +0000 UTC m=+175.646058038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.648320 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hdsz" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.738689 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-utilities\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.739005 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-catalog-content\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.739041 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d5jj\" (UniqueName: \"kubernetes.io/projected/7de9fc74-9948-4e73-ac93-25f9c22189ce-kube-api-access-2d5jj\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.739086 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.739277 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-utilities\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.739461 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.239445613 +0000 UTC m=+175.747616477 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.739531 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-catalog-content\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf"
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.756793 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5p48j"]
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.757760 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5p48j"
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.807480 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.807538 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.807502 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.807945 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.826434 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sg88v" event={"ID":"4c5b7670-d8d9-4ea1-822d-788709c62ee5","Type":"ContainerStarted","Data":"2406d77731cc6ca8f0db660ba1c05448b07d90cdfcff6256ad01a1a76fbffb6e"}
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.841007 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.841219 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnkz5\" (UniqueName: \"kubernetes.io/projected/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-kube-api-access-xnkz5\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j"
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.841246 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-utilities\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j"
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.841285 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-catalog-content\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j"
Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.841446 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.341428104 +0000 UTC m=+175.849598908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.854454 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d5jj\" (UniqueName: \"kubernetes.io/projected/7de9fc74-9948-4e73-ac93-25f9c22189ce-kube-api-access-2d5jj\") pod \"community-operators-w4vpf\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " pod="openshift-marketplace/community-operators-w4vpf"
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.861908 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5p48j"]
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.901424 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:21:24 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld
Jan 28 15:21:24 crc kubenswrapper[4656]: [+]process-running ok
Jan 28 15:21:24 crc kubenswrapper[4656]: healthz check failed
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.901487 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.942291 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.942368 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnkz5\" (UniqueName: \"kubernetes.io/projected/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-kube-api-access-xnkz5\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j"
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.942396 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-utilities\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j"
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.942440 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-catalog-content\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j"
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.943915 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-utilities\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j"
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.944322 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-catalog-content\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j"
Jan 28 15:21:24 crc kubenswrapper[4656]: E0128 15:21:24.944900 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.444883597 +0000 UTC m=+175.953054401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.971977 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-szsbj"]
Jan 28 15:21:24 crc kubenswrapper[4656]: I0128 15:21:24.972865 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szsbj"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.003993 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnkz5\" (UniqueName: \"kubernetes.io/projected/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-kube-api-access-xnkz5\") pod \"certified-operators-5p48j\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " pod="openshift-marketplace/certified-operators-5p48j"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.009547 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-szsbj"]
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.046811 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.047141 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj8vt\" (UniqueName: \"kubernetes.io/projected/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-kube-api-access-nj8vt\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.047228 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-utilities\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.047262 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-catalog-content\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj"
Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.047357 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.547340042 +0000 UTC m=+176.055510846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.089623 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5p48j"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.148870 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.148922 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-utilities\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.148953 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-catalog-content\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.149001 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj8vt\" (UniqueName: \"kubernetes.io/projected/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-kube-api-access-nj8vt\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj"
Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.150599 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.650577599 +0000 UTC m=+176.158748403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.151093 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-utilities\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.153445 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-catalog-content\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.235079 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj8vt\" (UniqueName: \"kubernetes.io/projected/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-kube-api-access-nj8vt\") pod \"community-operators-szsbj\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " pod="openshift-marketplace/community-operators-szsbj"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.255448 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.258550 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.758517011 +0000 UTC m=+176.266687815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.358244 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.358677 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.858658388 +0000 UTC m=+176.366829192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.459969 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.460681 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:25.96066217 +0000 UTC m=+176.468832974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.478335 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.478376 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-jrkdc"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.493920 4656 patch_prober.go:28] interesting pod/console-f9d7485db-jrkdc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.494208 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jrkdc" podUID="acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.538737 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.547303 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w4vpf"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.549507 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szsbj"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.565502 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.566830 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.066812971 +0000 UTC m=+176.574983775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.671867 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.672308 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.172288752 +0000 UTC m=+176.680459556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.725971 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gzr9v"]
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.774049 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.774557 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.274531961 +0000 UTC m=+176.782702775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.853408 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzr9v" event={"ID":"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9","Type":"ContainerStarted","Data":"c98421a3a0c42729a0b7a3850570dd8ac89ef8e57c178261115d715189f4f351"}
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.860504 4656 generic.go:334] "Generic (PLEG): container finished" podID="25578c16-69f7-48c0-8a44-040950b9b8a1" containerID="8b5651c9577faa4a2a76aaa78d0b3751ec036612cc4cb26724879ce32d32d8f2" exitCode=0
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.860603 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" event={"ID":"25578c16-69f7-48c0-8a44-040950b9b8a1","Type":"ContainerDied","Data":"8b5651c9577faa4a2a76aaa78d0b3751ec036612cc4cb26724879ce32d32d8f2"}
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.878989 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.879357 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.379340333 +0000 UTC m=+176.887511137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.910540 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:21:25 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld
Jan 28 15:21:25 crc kubenswrapper[4656]: [+]process-running ok
Jan 28 15:21:25 crc kubenswrapper[4656]: healthz check failed
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.910609 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:21:25 crc kubenswrapper[4656]: I0128 15:21:25.983151 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:25 crc kubenswrapper[4656]: E0128 15:21:25.984730 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.484707651 +0000 UTC m=+176.992878515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.084917 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.085236 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.5852161 +0000 UTC m=+177.093386904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.187084 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.187849 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.687832859 +0000 UTC m=+177.196003673 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.291899 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.292260 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.79224444 +0000 UTC m=+177.300415244 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.292480 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.312603 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nhqpx"]
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.313632 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhqpx"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.329688 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.331122 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-szsbj"]
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.353990 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhqpx"]
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.392986 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqjd8\" (UniqueName: \"kubernetes.io/projected/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-kube-api-access-rqjd8\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.393066 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-catalog-content\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.393101 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-utilities\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.393137 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.399334 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.899314998 +0000 UTC m=+177.407485882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.494321 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.494532 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-utilities\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.494645 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqjd8\" (UniqueName: \"kubernetes.io/projected/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-kube-api-access-rqjd8\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.494681 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-catalog-content\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.495153 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-catalog-content\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx"
Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.495561 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:26.995541663 +0000 UTC m=+177.503712467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.495649 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-utilities\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.582235 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqjd8\" (UniqueName: \"kubernetes.io/projected/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-kube-api-access-rqjd8\") pod \"redhat-marketplace-nhqpx\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " pod="openshift-marketplace/redhat-marketplace-nhqpx"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.597081 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.597468 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.097455452 +0000 UTC m=+177.605626256 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.668497 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhqpx"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.687947 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5p48j"]
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.699823 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.700354 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.200334839 +0000 UTC m=+177.708505643 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.713858 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.724148 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cxc6z"]
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.732199 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cxc6z"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.778526 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cxc6z"]
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.801774 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-catalog-content\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.802067 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-utilities\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.802371 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm8p5\" (UniqueName: \"kubernetes.io/projected/e4ed5142-92c2-4f59-a383-f91999ce3dff-kube-api-access-lm8p5\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.802502 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.803030 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.30300824 +0000 UTC m=+177.811179074 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.887793 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w4vpf"]
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.897915 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qh5kz"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.904700 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.905251 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-catalog-content\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.905287 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-utilities\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.905395 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm8p5\" (UniqueName: \"kubernetes.io/projected/e4ed5142-92c2-4f59-a383-f91999ce3dff-kube-api-access-lm8p5\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z"
Jan 28 15:21:26 crc kubenswrapper[4656]: E0128 15:21:26.905922 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.405900277 +0000 UTC m=+177.914071081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.906807 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-catalog-content\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.907109 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-utilities\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.912518 4656 generic.go:334] "Generic (PLEG): container finished" podID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerID="6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a" exitCode=0
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.912680 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzr9v" event={"ID":"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9","Type":"ContainerDied","Data":"6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a"}
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.915560 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p48j" event={"ID":"42c5c29d-eebc-40b2-8a6d-a7a592efd69d","Type":"ContainerStarted","Data":"97e67bc73ce17661993af527d6be763d8d88936ad8acbd056a6ba60090f1140e"}
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.926535 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szsbj" event={"ID":"d6812603-edd0-45f4-b2b3-6d9ece7e98c2","Type":"ContainerStarted","Data":"924a066d9ecea51a84946c62ff66f97e3aaf6278887d64d1b9553585e9692514"}
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.932577 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.934284 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:21:26 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld
Jan 28 15:21:26 crc kubenswrapper[4656]: [+]process-running ok
Jan 28 15:21:26 crc kubenswrapper[4656]: healthz check failed
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.934341 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.942974 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm8p5\" (UniqueName: \"kubernetes.io/projected/e4ed5142-92c2-4f59-a383-f91999ce3dff-kube-api-access-lm8p5\") pod \"redhat-marketplace-cxc6z\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " pod="openshift-marketplace/redhat-marketplace-cxc6z"
Jan 28 15:21:26 crc kubenswrapper[4656]: I0128 15:21:26.966038 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sg88v" event={"ID":"4c5b7670-d8d9-4ea1-822d-788709c62ee5","Type":"ContainerStarted","Data":"a3d37359b98e005965df7bde715526e78c09160603d980e647ece656e9f1f863"}
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.007497 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.011923 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.511908334 +0000 UTC m=+178.020079138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.115495 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cxc6z"
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.119881 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.120506 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.620485784 +0000 UTC m=+178.128656588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.223960 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.224567 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.724555325 +0000 UTC m=+178.232726129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.267720 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhqpx"]
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.325362 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.325742 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.825722493 +0000 UTC m=+178.333893297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.431361 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.431831 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:27.931813132 +0000 UTC m=+178.439983936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.434120 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl"
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.533749 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25578c16-69f7-48c0-8a44-040950b9b8a1-secret-volume\") pod \"25578c16-69f7-48c0-8a44-040950b9b8a1\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") "
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.533841 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25578c16-69f7-48c0-8a44-040950b9b8a1-config-volume\") pod \"25578c16-69f7-48c0-8a44-040950b9b8a1\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") "
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.533984 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.534072 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9pk9\" (UniqueName: \"kubernetes.io/projected/25578c16-69f7-48c0-8a44-040950b9b8a1-kube-api-access-j9pk9\") pod \"25578c16-69f7-48c0-8a44-040950b9b8a1\" (UID: \"25578c16-69f7-48c0-8a44-040950b9b8a1\") "
Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.534547 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.034514204 +0000 UTC m=+178.542685048 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.535063 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25578c16-69f7-48c0-8a44-040950b9b8a1-config-volume" (OuterVolumeSpecName: "config-volume") pod "25578c16-69f7-48c0-8a44-040950b9b8a1" (UID: "25578c16-69f7-48c0-8a44-040950b9b8a1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.556058 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25578c16-69f7-48c0-8a44-040950b9b8a1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "25578c16-69f7-48c0-8a44-040950b9b8a1" (UID: "25578c16-69f7-48c0-8a44-040950b9b8a1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.557618 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25578c16-69f7-48c0-8a44-040950b9b8a1-kube-api-access-j9pk9" (OuterVolumeSpecName: "kube-api-access-j9pk9") pod "25578c16-69f7-48c0-8a44-040950b9b8a1" (UID: "25578c16-69f7-48c0-8a44-040950b9b8a1"). InnerVolumeSpecName "kube-api-access-j9pk9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.591519 4656 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.637068 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.637206 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9pk9\" (UniqueName: \"kubernetes.io/projected/25578c16-69f7-48c0-8a44-040950b9b8a1-kube-api-access-j9pk9\") on node \"crc\" DevicePath \"\""
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.637220 4656 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25578c16-69f7-48c0-8a44-040950b9b8a1-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.637229 4656 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25578c16-69f7-48c0-8a44-040950b9b8a1-config-volume\") on node \"crc\" DevicePath \"\""
Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.637492 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.137477723 +0000 UTC m=+178.645648527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.716592 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cxc6z"]
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.738182 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.738384 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.238360702 +0000 UTC m=+178.746531516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.738474 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.738978 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.23896742 +0000 UTC m=+178.747138224 (durationBeforeRetry 500ms).
Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.738978 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.23896742 +0000 UTC m=+178.747138224 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.839509 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.839857 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.339816938 +0000 UTC m=+178.847987742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.840007 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.840401 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.340391845 +0000 UTC m=+178.848562649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.900800 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:21:27 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld
Jan 28 15:21:27 crc kubenswrapper[4656]: [+]process-running ok
Jan 28 15:21:27 crc kubenswrapper[4656]: healthz check failed
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.900856 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.916967 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8dc6j"]
Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.917231 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25578c16-69f7-48c0-8a44-040950b9b8a1" containerName="collect-profiles"
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.917273 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="25578c16-69f7-48c0-8a44-040950b9b8a1" containerName="collect-profiles"
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.917439 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="25578c16-69f7-48c0-8a44-040950b9b8a1" containerName="collect-profiles"
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.918273 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8dc6j"
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.922522 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.941236 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.941457 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.441431779 +0000 UTC m=+178.949602583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.941538 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-utilities\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j"
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.941624 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkjf8\" (UniqueName: \"kubernetes.io/projected/a6b1aae7-caaa-427d-8b07-705b02e81763-kube-api-access-zkjf8\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j"
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.941665 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-catalog-content\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j"
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.941756 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x"
Jan 28 15:21:27 crc kubenswrapper[4656]: E0128 15:21:27.942120 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.442108988 +0000 UTC m=+178.950279792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:21:27 crc kubenswrapper[4656]: I0128 15:21:27.955180 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8dc6j"]
Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.002973 4656 generic.go:334] "Generic (PLEG): container finished" podID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerID="b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414" exitCode=0
Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.003110 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szsbj" event={"ID":"d6812603-edd0-45f4-b2b3-6d9ece7e98c2","Type":"ContainerDied","Data":"b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414"}
Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.010918 4656 generic.go:334] "Generic (PLEG): container finished" podID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerID="cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98" exitCode=0
Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.011021 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p48j" event={"ID":"42c5c29d-eebc-40b2-8a6d-a7a592efd69d","Type":"ContainerDied","Data":"cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98"}
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.016666 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl" event={"ID":"25578c16-69f7-48c0-8a44-040950b9b8a1","Type":"ContainerDied","Data":"1e258fe47c6a006d11d349c174bef29feb367cacb2c7876fb9105c8ca262268d"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.016884 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e258fe47c6a006d11d349c174bef29feb367cacb2c7876fb9105c8ca262268d" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.027221 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sg88v" event={"ID":"4c5b7670-d8d9-4ea1-822d-788709c62ee5","Type":"ContainerStarted","Data":"d2e28b36c0eb1989052cb495e3bc543bf489515a0fd61fb01510c270a01d64cb"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.034885 4656 generic.go:334] "Generic (PLEG): container finished" podID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerID="daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050" exitCode=0 Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.035102 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhqpx" event={"ID":"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1","Type":"ContainerDied","Data":"daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.035196 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhqpx" event={"ID":"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1","Type":"ContainerStarted","Data":"cf973c02c04ef3c79efcf712d2d128bb8313be52f3235da8828930c75b3c34ff"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.044448 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.044813 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-utilities\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.044895 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkjf8\" (UniqueName: \"kubernetes.io/projected/a6b1aae7-caaa-427d-8b07-705b02e81763-kube-api-access-zkjf8\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.044956 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-catalog-content\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:28 crc kubenswrapper[4656]: E0128 15:21:28.046126 4656 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.546109947 +0000 UTC m=+179.054280751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.046663 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-utilities\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.048974 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-catalog-content\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.051267 4656 generic.go:334] "Generic (PLEG): container finished" podID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerID="142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8" exitCode=0 Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.051388 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4vpf" event={"ID":"7de9fc74-9948-4e73-ac93-25f9c22189ce","Type":"ContainerDied","Data":"142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.051427 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4vpf" event={"ID":"7de9fc74-9948-4e73-ac93-25f9c22189ce","Type":"ContainerStarted","Data":"cf818feb09c0ccfd9e455faaffde6799b01a2c05ed95767814d1627d35d8c054"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.061516 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxc6z" event={"ID":"e4ed5142-92c2-4f59-a383-f91999ce3dff","Type":"ContainerStarted","Data":"7cabac754976a0ee49a3872bb0095429cd22f9300449f019606cabdd291f04c7"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.061568 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxc6z" event={"ID":"e4ed5142-92c2-4f59-a383-f91999ce3dff","Type":"ContainerStarted","Data":"675ef5c857ca65a9f9f28b7fc7e7d9027ae949f4a6695cc47226b6c49738e9b9"} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.064831 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-sg88v" podStartSLOduration=15.064797884 podStartE2EDuration="15.064797884s" podCreationTimestamp="2026-01-28 15:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:28.057315319 +0000 UTC m=+178.565486143" 
watchObservedRunningTime="2026-01-28 15:21:28.064797884 +0000 UTC m=+178.572968688" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.088132 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkjf8\" (UniqueName: \"kubernetes.io/projected/a6b1aae7-caaa-427d-8b07-705b02e81763-kube-api-access-zkjf8\") pod \"redhat-operators-8dc6j\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.147030 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:28 crc kubenswrapper[4656]: E0128 15:21:28.148187 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.64815418 +0000 UTC m=+179.156324974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.236899 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.247927 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:28 crc kubenswrapper[4656]: E0128 15:21:28.248110 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.748078592 +0000 UTC m=+179.256249406 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.248194 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:28 crc kubenswrapper[4656]: E0128 15:21:28.248555 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.748547676 +0000 UTC m=+179.256718480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.305662 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zqqvv"] Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.306795 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.319735 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqqvv"] Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.352682 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.353089 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-catalog-content\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.353137 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4wrb\" (UniqueName: \"kubernetes.io/projected/d4371d7c-f72d-4765-9101-34946d11d0e7-kube-api-access-z4wrb\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: E0128 15:21:28.353940 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.853909674 +0000 UTC m=+179.362080488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.354310 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-utilities\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.355611 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:28 crc kubenswrapper[4656]: E0128 15:21:28.356765 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.856744115 +0000 UTC m=+179.364914919 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5r48x" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.452498 4656 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-28T15:21:27.591558923Z","Handler":null,"Name":""} Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.457657 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.457948 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-utilities\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.458020 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-catalog-content\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.458038 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4wrb\" (UniqueName: \"kubernetes.io/projected/d4371d7c-f72d-4765-9101-34946d11d0e7-kube-api-access-z4wrb\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.459819 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-utilities\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.459846 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-catalog-content\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: E0128 15:21:28.459934 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:21:28.95991334 +0000 UTC m=+179.468084144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.480605 4656 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.480651 4656 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.496601 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4wrb\" (UniqueName: \"kubernetes.io/projected/d4371d7c-f72d-4765-9101-34946d11d0e7-kube-api-access-z4wrb\") pod \"redhat-operators-zqqvv\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.542347 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8dc6j"] Jan 28 15:21:28 crc kubenswrapper[4656]: W0128 15:21:28.550101 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6b1aae7_caaa_427d_8b07_705b02e81763.slice/crio-4e651d4a13052ffb208c07169669a56bdcc7dc1c17ca4b751d5784fb82cafa0f WatchSource:0}: Error finding container 4e651d4a13052ffb208c07169669a56bdcc7dc1c17ca4b751d5784fb82cafa0f: Status 404 returned error can't find the container with id 4e651d4a13052ffb208c07169669a56bdcc7dc1c17ca4b751d5784fb82cafa0f Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.559317 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.561379 4656 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.561405 4656 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.589922 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5r48x\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.642705 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.660315 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.705310 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.739615 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.898853 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:28 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:28 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:28 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.899313 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:28 crc kubenswrapper[4656]: I0128 15:21:28.980232 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqqvv"] Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.092342 4656 generic.go:334] "Generic (PLEG): container finished" podID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerID="98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388" exitCode=0 Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.093577 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dc6j" event={"ID":"a6b1aae7-caaa-427d-8b07-705b02e81763","Type":"ContainerDied","Data":"98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388"} Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.093609 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dc6j" event={"ID":"a6b1aae7-caaa-427d-8b07-705b02e81763","Type":"ContainerStarted","Data":"4e651d4a13052ffb208c07169669a56bdcc7dc1c17ca4b751d5784fb82cafa0f"} Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.124582 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqvv" event={"ID":"d4371d7c-f72d-4765-9101-34946d11d0e7","Type":"ContainerStarted","Data":"212e83a1856ef8c98ccafed8d21edbd6de5e1d834be4a94f2e2855ab3ac7c760"} Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.136534 4656 generic.go:334] "Generic (PLEG): container finished" podID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerID="7cabac754976a0ee49a3872bb0095429cd22f9300449f019606cabdd291f04c7" exitCode=0 Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.137287 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxc6z" event={"ID":"e4ed5142-92c2-4f59-a383-f91999ce3dff","Type":"ContainerDied","Data":"7cabac754976a0ee49a3872bb0095429cd22f9300449f019606cabdd291f04c7"} Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.185526 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.297376 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5r48x"] Jan 28 15:21:29 crc kubenswrapper[4656]: W0128 15:21:29.315392 4656 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5823f5c7_fabe_4d4b_a3df_49349749b19e.slice/crio-54b236edb7b566b5bae8f9b4e93d4a4d144dfcf0ffa1b6e2bf3e66d3161ef327 WatchSource:0}: Error finding container 54b236edb7b566b5bae8f9b4e93d4a4d144dfcf0ffa1b6e2bf3e66d3161ef327: Status 404 returned error can't find the container with id 54b236edb7b566b5bae8f9b4e93d4a4d144dfcf0ffa1b6e2bf3e66d3161ef327 Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.479510 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.486492 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-j8tlz" Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.898698 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:29 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:29 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:29 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:29 crc kubenswrapper[4656]: I0128 15:21:29.898791 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.185806 4656 generic.go:334] "Generic (PLEG): container finished" podID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerID="1c47ef6d923405e3cbd5ebf98dc7b072df4b7f29ebc5c89b0c18a12b52617312" exitCode=0 Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.185894 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqvv" event={"ID":"d4371d7c-f72d-4765-9101-34946d11d0e7","Type":"ContainerDied","Data":"1c47ef6d923405e3cbd5ebf98dc7b072df4b7f29ebc5c89b0c18a12b52617312"} Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.188579 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" event={"ID":"5823f5c7-fabe-4d4b-a3df-49349749b19e","Type":"ContainerStarted","Data":"54b236edb7b566b5bae8f9b4e93d4a4d144dfcf0ffa1b6e2bf3e66d3161ef327"} Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.236271 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.238810 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.250820 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.251443 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.254743 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.301153 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.301271 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.435665 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.435935 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.436320 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.483011 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.600669 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.897248 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:30 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:30 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:30 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:30 crc kubenswrapper[4656]: I0128 15:21:30.897343 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:31 crc kubenswrapper[4656]: I0128 15:21:31.265421 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" event={"ID":"5823f5c7-fabe-4d4b-a3df-49349749b19e","Type":"ContainerStarted","Data":"e21bd5b2594163e4a314e6e9e388228b463a2c6dc9baa4a831b207ddedbec967"} Jan 28 15:21:31 crc kubenswrapper[4656]: I0128 15:21:31.270111 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:31 crc kubenswrapper[4656]: I0128 15:21:31.310800 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" podStartSLOduration=144.310780105 podStartE2EDuration="2m24.310780105s" podCreationTimestamp="2026-01-28 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:31.304949087 +0000 UTC m=+181.813119891" watchObservedRunningTime="2026-01-28 15:21:31.310780105 +0000 UTC m=+181.818950909" Jan 28 15:21:31 crc kubenswrapper[4656]: I0128 15:21:31.396641 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:21:31 crc kubenswrapper[4656]: W0128 15:21:31.466337 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda9bf2ac2_93ee_4cc9_b631_0c45a1326ad3.slice/crio-6fb5ecaa8f9ecd2f1b198b56ef1b535a4ed8a50d87add9511fbf2112c0b02fcd WatchSource:0}: Error finding container 6fb5ecaa8f9ecd2f1b198b56ef1b535a4ed8a50d87add9511fbf2112c0b02fcd: Status 404 returned error can't find the container with id 6fb5ecaa8f9ecd2f1b198b56ef1b535a4ed8a50d87add9511fbf2112c0b02fcd Jan 28 15:21:31 crc kubenswrapper[4656]: I0128 15:21:31.898648 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:31 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:31 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:31 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:31 crc kubenswrapper[4656]: I0128 15:21:31.898739 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 
Jan 28 15:21:32 crc kubenswrapper[4656]: I0128 15:21:32.352662 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3","Type":"ContainerStarted","Data":"6fb5ecaa8f9ecd2f1b198b56ef1b535a4ed8a50d87add9511fbf2112c0b02fcd"}
Jan 28 15:21:32 crc kubenswrapper[4656]: I0128 15:21:32.898670 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:21:32 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld
Jan 28 15:21:32 crc kubenswrapper[4656]: [+]process-running ok
Jan 28 15:21:32 crc kubenswrapper[4656]: healthz check failed
Jan 28 15:21:32 crc kubenswrapper[4656]: I0128 15:21:32.898753 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:21:32 crc kubenswrapper[4656]: I0128 15:21:32.958050 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 28 15:21:32 crc kubenswrapper[4656]: I0128 15:21:32.959609 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:21:32 crc kubenswrapper[4656]: I0128 15:21:32.962463 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 28 15:21:32 crc kubenswrapper[4656]: I0128 15:21:32.962488 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 28 15:21:32 crc kubenswrapper[4656]: I0128 15:21:32.963911 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.008758 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.008861 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.117974 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.118091 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.118254 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.159951 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.303057 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.397415 4656 generic.go:334] "Generic (PLEG): container finished" podID="a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3" containerID="dcf5a8e47139b85a5597a192fd620df6c5b8cfb5364a35cb7f5372a9d06ff97e" exitCode=0
Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.397467 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3","Type":"ContainerDied","Data":"dcf5a8e47139b85a5597a192fd620df6c5b8cfb5364a35cb7f5372a9d06ff97e"}
Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.887278 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.897236 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:21:33 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld
Jan 28 15:21:33 crc kubenswrapper[4656]: [+]process-running ok
Jan 28 15:21:33 crc kubenswrapper[4656]: healthz check failed
Jan 28 15:21:33 crc kubenswrapper[4656]: I0128 15:21:33.897316 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:21:34 crc kubenswrapper[4656]: I0128 15:21:34.439711 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121","Type":"ContainerStarted","Data":"84c0329afaccfa22e761b86fa3fc6e73cbb668de3176eaf9528f58bd7bc3e101"}
Jan 28 15:21:34 crc kubenswrapper[4656]: I0128 15:21:34.817372 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 28 15:21:34 crc kubenswrapper[4656]: I0128 15:21:34.817436 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 28 15:21:34 crc kubenswrapper[4656]: I0128 15:21:34.817786 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 28 15:21:34 crc kubenswrapper[4656]: I0128 15:21:34.817848 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 28 15:21:34 crc kubenswrapper[4656]: I0128 15:21:34.871601 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jgtrg"
Jan 28 15:21:34 crc kubenswrapper[4656]: I0128 15:21:34.901475 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:21:34 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld
Jan 28 15:21:34 crc kubenswrapper[4656]: [+]process-running ok
Jan 28 15:21:34 crc kubenswrapper[4656]: healthz check failed
Jan 28 15:21:34 crc kubenswrapper[4656]: I0128 15:21:34.901545 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.075027 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.167371 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kube-api-access\") pod \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\" (UID: \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\") "
Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.167520 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kubelet-dir\") pod \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\" (UID: \"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3\") "
Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.167725 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3" (UID: "a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.167864 4656 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.176733 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3" (UID: "a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.269278 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.482636 4656 patch_prober.go:28] interesting pod/console-f9d7485db-jrkdc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.482716 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jrkdc" podUID="acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused"
Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.543272 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3","Type":"ContainerDied","Data":"6fb5ecaa8f9ecd2f1b198b56ef1b535a4ed8a50d87add9511fbf2112c0b02fcd"}
Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.543345 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fb5ecaa8f9ecd2f1b198b56ef1b535a4ed8a50d87add9511fbf2112c0b02fcd"
Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.543264 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.546677 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121","Type":"ContainerStarted","Data":"f5d376ff74ccc7c835ab3d99bdbe7f68d5f51587b1e1eb01388c2f6b4b6034e4"} Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.611379 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.611357015 podStartE2EDuration="3.611357015s" podCreationTimestamp="2026-01-28 15:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:35.60907931 +0000 UTC m=+186.117250134" watchObservedRunningTime="2026-01-28 15:21:35.611357015 +0000 UTC m=+186.119527819" Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.897960 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:35 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:35 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:35 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.898030 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.979905 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:21:35 crc kubenswrapper[4656]: I0128 15:21:35.982481 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.015355 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11320542-8463-40db-8981-632be2bd5a48-metrics-certs\") pod \"network-metrics-daemon-bmj6r\" (UID: \"11320542-8463-40db-8981-632be2bd5a48\") " pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.019935 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.028083 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bmj6r" Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.580104 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-jcx9v_0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9/cluster-samples-operator/0.log" Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.580632 4656 generic.go:334] "Generic (PLEG): container finished" podID="0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9" containerID="7edc8ce80170fb625b340ebafd351b32972cf8b8df5461abd8c2d61d65e5ac65" exitCode=2 Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.580840 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" event={"ID":"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9","Type":"ContainerDied","Data":"7edc8ce80170fb625b340ebafd351b32972cf8b8df5461abd8c2d61d65e5ac65"} Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.582245 4656 scope.go:117] "RemoveContainer" containerID="7edc8ce80170fb625b340ebafd351b32972cf8b8df5461abd8c2d61d65e5ac65" Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.600622 4656 generic.go:334] "Generic (PLEG): container finished" podID="74e9f8ac-c1b4-420c-b2b0-b08c05ae8121" containerID="f5d376ff74ccc7c835ab3d99bdbe7f68d5f51587b1e1eb01388c2f6b4b6034e4" exitCode=0 Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.600701 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121","Type":"ContainerDied","Data":"f5d376ff74ccc7c835ab3d99bdbe7f68d5f51587b1e1eb01388c2f6b4b6034e4"} Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.748593 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bmj6r"] Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.908379 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:36 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:36 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:36 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:36 crc kubenswrapper[4656]: I0128 15:21:36.908877 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:37 crc kubenswrapper[4656]: I0128 15:21:37.636709 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" event={"ID":"11320542-8463-40db-8981-632be2bd5a48","Type":"ContainerStarted","Data":"c36a75183fe84628d125ace03e17ed52ec119308ab220192ac09ebfaa96dda40"} Jan 28 15:21:37 crc kubenswrapper[4656]: I0128 15:21:37.692624 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-jcx9v_0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9/cluster-samples-operator/0.log" Jan 28 15:21:37 crc kubenswrapper[4656]: I0128 15:21:37.693380 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jcx9v" 
event={"ID":"0e017dd9-7ff8-43cf-bdf5-1a434dd4f3e9","Type":"ContainerStarted","Data":"f25ffe50d07dddcbf86b8873e292434a79dc1c8fd111f1caef1f6b655f90d681"} Jan 28 15:21:37 crc kubenswrapper[4656]: I0128 15:21:37.896074 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:37 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:37 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:37 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:37 crc kubenswrapper[4656]: I0128 15:21:37.896952 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.097018 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.130792 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kubelet-dir\") pod \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\" (UID: \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\") " Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.130891 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kube-api-access\") pod \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\" (UID: \"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121\") " Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.132147 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "74e9f8ac-c1b4-420c-b2b0-b08c05ae8121" (UID: "74e9f8ac-c1b4-420c-b2b0-b08c05ae8121"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.174516 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "74e9f8ac-c1b4-420c-b2b0-b08c05ae8121" (UID: "74e9f8ac-c1b4-420c-b2b0-b08c05ae8121"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.232892 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.232934 4656 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74e9f8ac-c1b4-420c-b2b0-b08c05ae8121-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.733873 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" event={"ID":"11320542-8463-40db-8981-632be2bd5a48","Type":"ContainerStarted","Data":"a9342fcc2dba6f40a59075419b212328f9f3c4179cdf65e552f51f43de4c98e8"} Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.780302 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"74e9f8ac-c1b4-420c-b2b0-b08c05ae8121","Type":"ContainerDied","Data":"84c0329afaccfa22e761b86fa3fc6e73cbb668de3176eaf9528f58bd7bc3e101"} Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.780366 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84c0329afaccfa22e761b86fa3fc6e73cbb668de3176eaf9528f58bd7bc3e101" Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.780327 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.898570 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:38 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:38 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:38 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:38 crc kubenswrapper[4656]: I0128 15:21:38.898941 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:39 crc kubenswrapper[4656]: I0128 15:21:39.896885 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:39 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:39 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:39 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:39 crc kubenswrapper[4656]: I0128 15:21:39.896982 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:40 crc kubenswrapper[4656]: I0128 15:21:40.895904 4656 patch_prober.go:28] interesting pod/router-default-5444994796-qh5kz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:21:40 crc kubenswrapper[4656]: [-]has-synced failed: reason withheld Jan 28 15:21:40 crc kubenswrapper[4656]: [+]process-running ok Jan 28 15:21:40 crc kubenswrapper[4656]: healthz check failed Jan 28 15:21:40 crc kubenswrapper[4656]: I0128 15:21:40.896323 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qh5kz" podUID="01e19302-0470-49dd-88d5-9a568e820278" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:21:41 crc kubenswrapper[4656]: I0128 15:21:41.264320 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:21:41 crc kubenswrapper[4656]: I0128 15:21:41.264408 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:21:41 crc kubenswrapper[4656]: I0128 15:21:41.337958 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:21:41 crc kubenswrapper[4656]: I0128 15:21:41.897284 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:41 crc kubenswrapper[4656]: I0128 15:21:41.903330 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qh5kz" Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.804971 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.805224 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.805387 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.805322 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.806292 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-zrrnn" Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 
15:21:44.807030 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.807059 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.807029 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"60c65196fb4866dd0b1e0bbc6538e306f747567dc79d0ebdf9efbab4620baf63"} pod="openshift-console/downloads-7954f5f757-zrrnn" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 28 15:21:44 crc kubenswrapper[4656]: I0128 15:21:44.808696 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" containerID="cri-o://60c65196fb4866dd0b1e0bbc6538e306f747567dc79d0ebdf9efbab4620baf63" gracePeriod=2 Jan 28 15:21:45 crc kubenswrapper[4656]: I0128 15:21:45.490956 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:45 crc kubenswrapper[4656]: I0128 15:21:45.494726 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:21:45 crc kubenswrapper[4656]: I0128 15:21:45.887118 4656 generic.go:334] "Generic (PLEG): container finished" podID="d903ef3d-1544-4343-b254-15939a05fec0" containerID="60c65196fb4866dd0b1e0bbc6538e306f747567dc79d0ebdf9efbab4620baf63" exitCode=0 Jan 28 15:21:45 crc kubenswrapper[4656]: I0128 15:21:45.887225 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zrrnn" event={"ID":"d903ef3d-1544-4343-b254-15939a05fec0","Type":"ContainerDied","Data":"60c65196fb4866dd0b1e0bbc6538e306f747567dc79d0ebdf9efbab4620baf63"} Jan 28 15:21:48 crc kubenswrapper[4656]: I0128 15:21:48.747910 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:21:54 crc kubenswrapper[4656]: I0128 15:21:54.805959 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:21:54 crc kubenswrapper[4656]: I0128 15:21:54.806543 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:21:56 crc kubenswrapper[4656]: I0128 15:21:56.723952 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wh267" Jan 28 15:21:59 crc kubenswrapper[4656]: I0128 15:21:59.132792 4656 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:22:00 crc kubenswrapper[4656]: I0128 15:22:00.995121 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bmj6r" event={"ID":"11320542-8463-40db-8981-632be2bd5a48","Type":"ContainerStarted","Data":"284a932738339e158c1fb021a3fc51f5de0c01c0052089e71babe0ff6a3f5050"} Jan 28 15:22:01 crc kubenswrapper[4656]: I0128 15:22:01.019795 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bmj6r" podStartSLOduration=175.019772037 podStartE2EDuration="2m55.019772037s" podCreationTimestamp="2026-01-28 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:01.017324748 +0000 UTC m=+211.525495562" watchObservedRunningTime="2026-01-28 15:22:01.019772037 +0000 UTC m=+211.527942841" Jan 28 15:22:04 crc kubenswrapper[4656]: I0128 15:22:04.804134 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:22:04 crc kubenswrapper[4656]: I0128 15:22:04.804222 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.743226 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 15:22:07 crc kubenswrapper[4656]: E0128 15:22:07.743640 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e9f8ac-c1b4-420c-b2b0-b08c05ae8121" containerName="pruner" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.743657 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e9f8ac-c1b4-420c-b2b0-b08c05ae8121" containerName="pruner" Jan 28 15:22:07 crc kubenswrapper[4656]: E0128 15:22:07.743666 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3" containerName="pruner" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.743673 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3" containerName="pruner" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.743828 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e9f8ac-c1b4-420c-b2b0-b08c05ae8121" containerName="pruner" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.743843 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9bf2ac2-93ee-4cc9-b631-0c45a1326ad3" containerName="pruner" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.744343 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.750069 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.751717 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.752512 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.819638 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e313033e-82cc-4bb8-8151-f867b966a330-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e313033e-82cc-4bb8-8151-f867b966a330\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.819905 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e313033e-82cc-4bb8-8151-f867b966a330-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e313033e-82cc-4bb8-8151-f867b966a330\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.921513 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e313033e-82cc-4bb8-8151-f867b966a330-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e313033e-82cc-4bb8-8151-f867b966a330\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.921659 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e313033e-82cc-4bb8-8151-f867b966a330-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e313033e-82cc-4bb8-8151-f867b966a330\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.921699 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e313033e-82cc-4bb8-8151-f867b966a330-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e313033e-82cc-4bb8-8151-f867b966a330\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:07 crc kubenswrapper[4656]: I0128 15:22:07.945404 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e313033e-82cc-4bb8-8151-f867b966a330-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e313033e-82cc-4bb8-8151-f867b966a330\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:08 crc kubenswrapper[4656]: I0128 15:22:08.060047 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:22:09 crc kubenswrapper[4656]: E0128 15:22:09.693946 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 15:22:09 crc kubenswrapper[4656]: E0128 15:22:09.694331 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkjf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8dc6j_openshift-marketplace(a6b1aae7-caaa-427d-8b07-705b02e81763): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:22:09 crc kubenswrapper[4656]: E0128 15:22:09.696139 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8dc6j" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" Jan 28 15:22:10 crc kubenswrapper[4656]: E0128 15:22:10.709134 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8dc6j" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" Jan 28 15:22:10 crc kubenswrapper[4656]: E0128 15:22:10.865112 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 15:22:10 crc kubenswrapper[4656]: E0128 15:22:10.865341 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nj8vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-szsbj_openshift-marketplace(d6812603-edd0-45f4-b2b3-6d9ece7e98c2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:22:10 crc kubenswrapper[4656]: E0128 15:22:10.866537 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-szsbj" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" Jan 28 15:22:11 crc kubenswrapper[4656]: I0128 15:22:11.264774 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:22:11 crc kubenswrapper[4656]: I0128 15:22:11.265136 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:22:11 crc kubenswrapper[4656]: I0128 15:22:11.939656 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 15:22:11 crc kubenswrapper[4656]: I0128 15:22:11.940346 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:11 crc kubenswrapper[4656]: I0128 15:22:11.952319 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.020992 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.021419 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23d13458-d18f-4e50-bd07-61d18319b4c7-kube-api-access\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.021563 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-var-lock\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.127013 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.127106 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23d13458-d18f-4e50-bd07-61d18319b4c7-kube-api-access\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.127270 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.128846 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-var-lock\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.129129 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-var-lock\") pod \"installer-9-crc\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.149432 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23d13458-d18f-4e50-bd07-61d18319b4c7-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"23d13458-d18f-4e50-bd07-61d18319b4c7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: I0128 15:22:12.336337 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:12 crc kubenswrapper[4656]: E0128 15:22:12.696782 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-szsbj" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" Jan 28 15:22:12 crc kubenswrapper[4656]: E0128 15:22:12.795892 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 15:22:12 crc kubenswrapper[4656]: E0128 15:22:12.796127 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xnkz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-5p48j_openshift-marketplace(42c5c29d-eebc-40b2-8a6d-a7a592efd69d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:22:12 crc kubenswrapper[4656]: E0128 15:22:12.797504 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5p48j" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" Jan 28 15:22:12 crc kubenswrapper[4656]: E0128 15:22:12.815368 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 15:22:12 crc kubenswrapper[4656]: E0128 15:22:12.815555 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4wrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zqqvv_openshift-marketplace(d4371d7c-f72d-4765-9101-34946d11d0e7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:22:12 crc kubenswrapper[4656]: E0128 15:22:12.816893 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zqqvv" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.237636 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5p48j" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.357338 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.357747 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2d5jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-w4vpf_openshift-marketplace(7de9fc74-9948-4e73-ac93-25f9c22189ce): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.359144 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-w4vpf" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.364208 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.364322 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqjd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-nhqpx_openshift-marketplace(fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.391415 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nhqpx" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.407607 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.407979 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lm8p5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-cxc6z_openshift-marketplace(e4ed5142-92c2-4f59-a383-f91999ce3dff): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.409334 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cxc6z" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.466707 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.467104 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftms7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gzr9v_openshift-marketplace(f0fe12da-fb7d-444b-b8d3-47e5988fb7f9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 28 15:22:14 crc kubenswrapper[4656]: E0128 15:22:14.468298 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gzr9v" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9"
Jan 28 15:22:14 crc kubenswrapper[4656]: I0128 15:22:14.771067 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 28 15:22:14 crc kubenswrapper[4656]: I0128 15:22:14.812884 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 28 15:22:14 crc kubenswrapper[4656]: I0128 15:22:14.812952 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 28 15:22:14 crc kubenswrapper[4656]: I0128 15:22:14.839459 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 28 15:22:14 crc kubenswrapper[4656]: W0128 15:22:14.845380 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode313033e_82cc_4bb8_8151_f867b966a330.slice/crio-c28fd5ce720db7db4d295d50ce6c13119bb40ef0a2e85929d659b0e26d73b9ef WatchSource:0}: Error finding container c28fd5ce720db7db4d295d50ce6c13119bb40ef0a2e85929d659b0e26d73b9ef: Status 404 returned error can't find the container with id c28fd5ce720db7db4d295d50ce6c13119bb40ef0a2e85929d659b0e26d73b9ef
Jan 28 15:22:15 crc kubenswrapper[4656]: I0128 15:22:15.109069 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zrrnn" event={"ID":"d903ef3d-1544-4343-b254-15939a05fec0","Type":"ContainerStarted","Data":"8c4abed353fff3de84778aa7698659c7b27f4547e5b6249ec3f720a9ea39afd8"}
Jan 28 15:22:15 crc kubenswrapper[4656]: I0128 15:22:15.111572 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zrrnn"
Jan 28 15:22:15 crc kubenswrapper[4656]: I0128 15:22:15.111698 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 28 15:22:15 crc kubenswrapper[4656]: I0128 15:22:15.111768 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 28 15:22:15 crc kubenswrapper[4656]: I0128 15:22:15.116675 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e313033e-82cc-4bb8-8151-f867b966a330","Type":"ContainerStarted","Data":"c28fd5ce720db7db4d295d50ce6c13119bb40ef0a2e85929d659b0e26d73b9ef"}
Jan 28 15:22:15 crc kubenswrapper[4656]: I0128 15:22:15.128745 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"23d13458-d18f-4e50-bd07-61d18319b4c7","Type":"ContainerStarted","Data":"ecf52d47e7e50c6dd46bf76ed0fc0d2fa9f91e02bebe98c13fefcc025a606204"}
Jan 28 15:22:15 crc kubenswrapper[4656]: E0128 15:22:15.139645 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-cxc6z" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff"
Jan 28 15:22:15 crc kubenswrapper[4656]: E0128 15:22:15.139861 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gzr9v" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9"
Jan 28 15:22:15 crc kubenswrapper[4656]: E0128 15:22:15.139934 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-w4vpf" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce"
Jan 28 15:22:15 crc kubenswrapper[4656]: E0128 15:22:15.140012 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-nhqpx" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1"
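The four "Back-off pulling image" entries above show kubelet declining to retry the failed extract-content pulls immediately: after an ErrImagePull, the pull is re-queued under an increasing delay. A minimal sketch of that retry pattern, with illustrative delays and a hypothetical pullImage stand-in (kubelet's real back-off parameters are not shown in this log):

package main

import (
	"errors"
	"fmt"
	"time"
)

// pullImage is a hypothetical stand-in for the CRI image pull that
// failed above with "rpc error: code = Canceled".
func pullImage(ref string) error {
	return errors.New("rpc error: code = Canceled")
}

func main() {
	ref := "registry.redhat.io/redhat/certified-operator-index:v4.18"
	// Illustrative values only; not kubelet's actual back-off configuration.
	delay, maxDelay := 100*time.Millisecond, 2*time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		if err := pullImage(ref); err == nil {
			fmt.Println("pulled", ref)
			return
		}
		fmt.Printf("Back-off pulling image %q (attempt %d), next try in %v\n", ref, attempt, delay)
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay { // double the delay, capped
			delay = maxDelay
		}
	}
}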
event={"ID":"e313033e-82cc-4bb8-8151-f867b966a330","Type":"ContainerDied","Data":"5dce75929921c7f43d6dd2a96e286682fa25de5c7e3b0e2c341f9b6c1171c3c3"} Jan 28 15:22:16 crc kubenswrapper[4656]: I0128 15:22:16.135786 4656 generic.go:334] "Generic (PLEG): container finished" podID="e313033e-82cc-4bb8-8151-f867b966a330" containerID="5dce75929921c7f43d6dd2a96e286682fa25de5c7e3b0e2c341f9b6c1171c3c3" exitCode=0 Jan 28 15:22:16 crc kubenswrapper[4656]: I0128 15:22:16.137974 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"23d13458-d18f-4e50-bd07-61d18319b4c7","Type":"ContainerStarted","Data":"1b53bded64e844d0eb1de8d68357820c2e6d1bfc568f8f1b250cc8890c0cc777"} Jan 28 15:22:16 crc kubenswrapper[4656]: I0128 15:22:16.138697 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:22:16 crc kubenswrapper[4656]: I0128 15:22:16.138924 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:22:16 crc kubenswrapper[4656]: I0128 15:22:16.179440 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.179414767 podStartE2EDuration="5.179414767s" podCreationTimestamp="2026-01-28 15:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:16.175949309 +0000 UTC m=+226.684120133" watchObservedRunningTime="2026-01-28 15:22:16.179414767 +0000 UTC m=+226.687585571" Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.145429 4656 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrrnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.145497 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrrnn" podUID="d903ef3d-1544-4343-b254-15939a05fec0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.497691 4656 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.497691 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.555837 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e313033e-82cc-4bb8-8151-f867b966a330-kube-api-access\") pod \"e313033e-82cc-4bb8-8151-f867b966a330\" (UID: \"e313033e-82cc-4bb8-8151-f867b966a330\") "
Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.555897 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e313033e-82cc-4bb8-8151-f867b966a330-kubelet-dir\") pod \"e313033e-82cc-4bb8-8151-f867b966a330\" (UID: \"e313033e-82cc-4bb8-8151-f867b966a330\") "
Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.556192 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e313033e-82cc-4bb8-8151-f867b966a330-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e313033e-82cc-4bb8-8151-f867b966a330" (UID: "e313033e-82cc-4bb8-8151-f867b966a330"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.573039 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e313033e-82cc-4bb8-8151-f867b966a330-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e313033e-82cc-4bb8-8151-f867b966a330" (UID: "e313033e-82cc-4bb8-8151-f867b966a330"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.657103 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e313033e-82cc-4bb8-8151-f867b966a330-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.657171 4656 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e313033e-82cc-4bb8-8151-f867b966a330-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:17 crc kubenswrapper[4656]: I0128 15:22:17.988350 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jpgn"]
Jan 28 15:22:18 crc kubenswrapper[4656]: I0128 15:22:18.151402 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e313033e-82cc-4bb8-8151-f867b966a330","Type":"ContainerDied","Data":"c28fd5ce720db7db4d295d50ce6c13119bb40ef0a2e85929d659b0e26d73b9ef"}
Jan 28 15:22:18 crc kubenswrapper[4656]: I0128 15:22:18.151708 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c28fd5ce720db7db4d295d50ce6c13119bb40ef0a2e85929d659b0e26d73b9ef"
Jan 28 15:22:18 crc kubenswrapper[4656]: I0128 15:22:18.151501 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 15:22:24 crc kubenswrapper[4656]: I0128 15:22:24.810661 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-zrrnn"
Jan 28 15:22:27 crc kubenswrapper[4656]: I0128 15:22:27.235779 4656 generic.go:334] "Generic (PLEG): container finished" podID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerID="f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f" exitCode=0
Jan 28 15:22:27 crc kubenswrapper[4656]: I0128 15:22:27.235949 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dc6j" event={"ID":"a6b1aae7-caaa-427d-8b07-705b02e81763","Type":"ContainerDied","Data":"f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f"}
Jan 28 15:22:27 crc kubenswrapper[4656]: I0128 15:22:27.241331 4656 generic.go:334] "Generic (PLEG): container finished" podID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerID="5b4233be7c7686331f99bc85bff4b844e11a65ed773671095a89af9d01cb614c" exitCode=0
Jan 28 15:22:27 crc kubenswrapper[4656]: I0128 15:22:27.241379 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqvv" event={"ID":"d4371d7c-f72d-4765-9101-34946d11d0e7","Type":"ContainerDied","Data":"5b4233be7c7686331f99bc85bff4b844e11a65ed773671095a89af9d01cb614c"}
Jan 28 15:22:41 crc kubenswrapper[4656]: I0128 15:22:41.264019 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:22:41 crc kubenswrapper[4656]: I0128 15:22:41.264589 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:22:41 crc kubenswrapper[4656]: I0128 15:22:41.264737 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk"
Jan 28 15:22:41 crc kubenswrapper[4656]: I0128 15:22:41.265967 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 15:22:41 crc kubenswrapper[4656]: I0128 15:22:41.266065 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1" gracePeriod=600
Jan 28 15:22:43 crc kubenswrapper[4656]: I0128 15:22:43.048475 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" podUID="74b5802b-b8fb-48d1-8723-2c78386825db" containerName="oauth-openshift" containerID="cri-o://a3c344d4a99b4cc3c2b785ff5edacf8eec4aa5faf13d43bdf4fe9c80a6160a48" gracePeriod=15
Jan 28 15:22:43 crc kubenswrapper[4656]: I0128 15:22:43.349434 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1" exitCode=0
Jan 28 15:22:43 crc kubenswrapper[4656]: I0128 15:22:43.349492 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1"}
Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.356076 4656 generic.go:334] "Generic (PLEG): container finished" podID="74b5802b-b8fb-48d1-8723-2c78386825db" containerID="a3c344d4a99b4cc3c2b785ff5edacf8eec4aa5faf13d43bdf4fe9c80a6160a48" exitCode=0
Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.356191 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" event={"ID":"74b5802b-b8fb-48d1-8723-2c78386825db","Type":"ContainerDied","Data":"a3c344d4a99b4cc3c2b785ff5edacf8eec4aa5faf13d43bdf4fe9c80a6160a48"}
Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.836861 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn"
Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.898491 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl"]
Jan 28 15:22:44 crc kubenswrapper[4656]: E0128 15:22:44.898957 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e313033e-82cc-4bb8-8151-f867b966a330" containerName="pruner"
Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.898984 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e313033e-82cc-4bb8-8151-f867b966a330" containerName="pruner"
Jan 28 15:22:44 crc kubenswrapper[4656]: E0128 15:22:44.899010 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b5802b-b8fb-48d1-8723-2c78386825db" containerName="oauth-openshift"
Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.899018 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b5802b-b8fb-48d1-8723-2c78386825db" containerName="oauth-openshift"
Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.899211 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="74b5802b-b8fb-48d1-8723-2c78386825db" containerName="oauth-openshift"
Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.899425 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="e313033e-82cc-4bb8-8151-f867b966a330" containerName="pruner"
Need to start a new one" pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.906117 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl"] Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.948974 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-cliconfig\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949009 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-ocp-branding-template\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949034 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-router-certs\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949094 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949138 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949155 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949191 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-trusted-ca-bundle\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949217 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949248 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949272 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-service-ca\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949328 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-session\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949368 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74b5802b-b8fb-48d1-8723-2c78386825db-audit-dir\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949397 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmz5l\" (UniqueName: \"kubernetes.io/projected/74b5802b-b8fb-48d1-8723-2c78386825db-kube-api-access-lmz5l\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.949426 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login\") pod \"74b5802b-b8fb-48d1-8723-2c78386825db\" (UID: \"74b5802b-b8fb-48d1-8723-2c78386825db\") " Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.950143 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74b5802b-b8fb-48d1-8723-2c78386825db-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.950821 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.950825 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.951497 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.951763 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.957795 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74b5802b-b8fb-48d1-8723-2c78386825db-kube-api-access-lmz5l" (OuterVolumeSpecName: "kube-api-access-lmz5l") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "kube-api-access-lmz5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.965274 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.966372 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.968151 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.969605 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.975603 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.976442 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.977213 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:44 crc kubenswrapper[4656]: I0128 15:22:44.977664 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "74b5802b-b8fb-48d1-8723-2c78386825db" (UID: "74b5802b-b8fb-48d1-8723-2c78386825db"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052002 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-router-certs\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052052 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052129 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052175 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052220 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052244 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-session\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052290 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mhn8\" (UniqueName: \"kubernetes.io/projected/1424baf7-9559-4343-819e-a2a31200759f-kube-api-access-6mhn8\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052317 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-audit-policies\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052359 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-service-ca\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052379 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052396 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-error\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052426 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-login\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052454 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1424baf7-9559-4343-819e-a2a31200759f-audit-dir\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052483 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052530 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052543 4656 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74b5802b-b8fb-48d1-8723-2c78386825db-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc 
kubenswrapper[4656]: I0128 15:22:45.052553 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmz5l\" (UniqueName: \"kubernetes.io/projected/74b5802b-b8fb-48d1-8723-2c78386825db-kube-api-access-lmz5l\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052562 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052574 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052585 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052593 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052603 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052612 4656 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052621 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052630 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052640 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052649 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.052657 4656 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74b5802b-b8fb-48d1-8723-2c78386825db-v4-0-config-system-service-ca\") on node 
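The UnmountVolume/Volume-detached entries above and the MountVolume entries that follow are one pass of the kubelet volume manager's reconciler: it compares the desired set of volumes (the new oauth-openshift-85f4f78dc8-wpmgl pod) against the actual mounted set (the deleted 558db77b4-7jpgn pod), tearing down what is no longer wanted and setting up what is missing. A toy sketch of that diff with hypothetical data; the real reconciler additionally tracks per-pod state, attach/detach, and the SetUp/TearDown operations separately:

package main

import "fmt"

// reconcile is a toy model of the desired-vs-actual volume diff; volume
// names here are taken from the log but the data structures are invented.
func reconcile(desired, actual map[string]bool) {
	for v := range actual {
		if !desired[v] {
			fmt.Println("UnmountVolume started for volume", v)
		}
	}
	for v := range desired {
		if !actual[v] {
			fmt.Println("MountVolume started for volume", v)
		}
	}
}

func main() {
	actual := map[string]bool{"kube-api-access-lmz5l": true, "audit-dir": true}   // old pod
	desired := map[string]bool{"kube-api-access-6mhn8": true, "audit-dir": true} // new pod
	reconcile(desired, actual)
}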
\"crc\" DevicePath \"\"" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.154885 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155374 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155414 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155451 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155499 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-session\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155542 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mhn8\" (UniqueName: \"kubernetes.io/projected/1424baf7-9559-4343-819e-a2a31200759f-kube-api-access-6mhn8\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155569 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-audit-policies\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155610 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-service-ca\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 
28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155636 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155661 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-error\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.156755 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.156758 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-service-ca\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.155697 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-login\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.157291 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-audit-policies\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.157308 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1424baf7-9559-4343-819e-a2a31200759f-audit-dir\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.157355 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.157400 4656 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-router-certs\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.157663 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1424baf7-9559-4343-819e-a2a31200759f-audit-dir\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.158749 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.160235 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.160802 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.163191 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.164506 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-login\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.167220 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-user-template-error\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.171021 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.173564 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-session\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.177430 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1424baf7-9559-4343-819e-a2a31200759f-v4-0-config-system-router-certs\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.179547 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mhn8\" (UniqueName: \"kubernetes.io/projected/1424baf7-9559-4343-819e-a2a31200759f-kube-api-access-6mhn8\") pod \"oauth-openshift-85f4f78dc8-wpmgl\" (UID: \"1424baf7-9559-4343-819e-a2a31200759f\") " pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.221211 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.398495 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dc6j" event={"ID":"a6b1aae7-caaa-427d-8b07-705b02e81763","Type":"ContainerStarted","Data":"f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9"} Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.401508 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhqpx" event={"ID":"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1","Type":"ContainerStarted","Data":"3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b"} Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.403849 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqvv" event={"ID":"d4371d7c-f72d-4765-9101-34946d11d0e7","Type":"ContainerStarted","Data":"359fdaf31fe6b20b24349fbcd0fa88495f66725f41848eea6762b2e197b6023e"} Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.405689 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxc6z" event={"ID":"e4ed5142-92c2-4f59-a383-f91999ce3dff","Type":"ContainerStarted","Data":"10ebf6db7ee2964bf2e7f1a9dc6ff72c579ec55214bcb19505e1b006f8bbcdba"} Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.409445 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" event={"ID":"74b5802b-b8fb-48d1-8723-2c78386825db","Type":"ContainerDied","Data":"4dc61eaad2ae739a312d7f61027749c556ace7e16aad00f87ef7790d83668fcc"} Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.409510 4656 scope.go:117] "RemoveContainer" 
containerID="a3c344d4a99b4cc3c2b785ff5edacf8eec4aa5faf13d43bdf4fe9c80a6160a48" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.409618 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7jpgn" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.411984 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szsbj" event={"ID":"d6812603-edd0-45f4-b2b3-6d9ece7e98c2","Type":"ContainerStarted","Data":"94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2"} Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.414104 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p48j" event={"ID":"42c5c29d-eebc-40b2-8a6d-a7a592efd69d","Type":"ContainerStarted","Data":"6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2"} Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.417148 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"d18f94cea4f3c54ba99c855b801d8b744d7657dab8312dfc4b6351d91d1b429d"} Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.433591 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8dc6j" podStartSLOduration=2.578684326 podStartE2EDuration="1m18.433568684s" podCreationTimestamp="2026-01-28 15:21:27 +0000 UTC" firstStartedPulling="2026-01-28 15:21:29.094794316 +0000 UTC m=+179.602965130" lastFinishedPulling="2026-01-28 15:22:44.949678684 +0000 UTC m=+255.457849488" observedRunningTime="2026-01-28 15:22:45.429829858 +0000 UTC m=+255.938000672" watchObservedRunningTime="2026-01-28 15:22:45.433568684 +0000 UTC m=+255.941739498" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.492591 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zqqvv" podStartSLOduration=3.866182555 podStartE2EDuration="1m17.492571789s" podCreationTimestamp="2026-01-28 15:21:28 +0000 UTC" firstStartedPulling="2026-01-28 15:21:31.281440752 +0000 UTC m=+181.789611556" lastFinishedPulling="2026-01-28 15:22:44.907829986 +0000 UTC m=+255.416000790" observedRunningTime="2026-01-28 15:22:45.488569786 +0000 UTC m=+255.996740590" watchObservedRunningTime="2026-01-28 15:22:45.492571789 +0000 UTC m=+256.000742593" Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.589338 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl"] Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.639021 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jpgn"] Jan 28 15:22:45 crc kubenswrapper[4656]: I0128 15:22:45.644192 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jpgn"] Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.434491 4656 generic.go:334] "Generic (PLEG): container finished" podID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerID="10ebf6db7ee2964bf2e7f1a9dc6ff72c579ec55214bcb19505e1b006f8bbcdba" exitCode=0 Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.434572 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxc6z" 
event={"ID":"e4ed5142-92c2-4f59-a383-f91999ce3dff","Type":"ContainerDied","Data":"10ebf6db7ee2964bf2e7f1a9dc6ff72c579ec55214bcb19505e1b006f8bbcdba"} Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.439397 4656 generic.go:334] "Generic (PLEG): container finished" podID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerID="94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2" exitCode=0 Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.439472 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szsbj" event={"ID":"d6812603-edd0-45f4-b2b3-6d9ece7e98c2","Type":"ContainerDied","Data":"94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2"} Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.444655 4656 generic.go:334] "Generic (PLEG): container finished" podID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerID="6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2" exitCode=0 Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.444746 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p48j" event={"ID":"42c5c29d-eebc-40b2-8a6d-a7a592efd69d","Type":"ContainerDied","Data":"6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2"} Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.451852 4656 generic.go:334] "Generic (PLEG): container finished" podID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerID="3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b" exitCode=0 Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.451924 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhqpx" event={"ID":"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1","Type":"ContainerDied","Data":"3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b"} Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.455653 4656 generic.go:334] "Generic (PLEG): container finished" podID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerID="a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a" exitCode=0 Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.455716 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4vpf" event={"ID":"7de9fc74-9948-4e73-ac93-25f9c22189ce","Type":"ContainerDied","Data":"a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a"} Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.458890 4656 generic.go:334] "Generic (PLEG): container finished" podID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerID="3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f" exitCode=0 Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.458943 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzr9v" event={"ID":"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9","Type":"ContainerDied","Data":"3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f"} Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.473254 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" event={"ID":"1424baf7-9559-4343-819e-a2a31200759f","Type":"ContainerStarted","Data":"b8607571541c31959c59bdccd5cee993218f221122b21d7bec939e2feab7e877"} Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.473308 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" 
event={"ID":"1424baf7-9559-4343-819e-a2a31200759f","Type":"ContainerStarted","Data":"306607f83c7b0a1c0f1315734ac758e6780e687fd87c131f4e36d3b541977411"} Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.473523 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.480254 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" Jan 28 15:22:46 crc kubenswrapper[4656]: I0128 15:22:46.594937 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-85f4f78dc8-wpmgl" podStartSLOduration=28.5949138 podStartE2EDuration="28.5949138s" podCreationTimestamp="2026-01-28 15:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:46.592204503 +0000 UTC m=+257.100375307" watchObservedRunningTime="2026-01-28 15:22:46.5949138 +0000 UTC m=+257.103084604" Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.194791 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74b5802b-b8fb-48d1-8723-2c78386825db" path="/var/lib/kubelet/pods/74b5802b-b8fb-48d1-8723-2c78386825db/volumes" Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.479257 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxc6z" event={"ID":"e4ed5142-92c2-4f59-a383-f91999ce3dff","Type":"ContainerStarted","Data":"98e5a5ee9e3fbdbc98cc8d8842a89ca56d4d33d594c2125a555bfd41937a3fbb"} Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.482371 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szsbj" event={"ID":"d6812603-edd0-45f4-b2b3-6d9ece7e98c2","Type":"ContainerStarted","Data":"cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723"} Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.484138 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p48j" event={"ID":"42c5c29d-eebc-40b2-8a6d-a7a592efd69d","Type":"ContainerStarted","Data":"8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c"} Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.485710 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhqpx" event={"ID":"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1","Type":"ContainerStarted","Data":"78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41"} Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.487290 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4vpf" event={"ID":"7de9fc74-9948-4e73-ac93-25f9c22189ce","Type":"ContainerStarted","Data":"10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746"} Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.488900 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzr9v" event={"ID":"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9","Type":"ContainerStarted","Data":"891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855"} Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.508198 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cxc6z" podStartSLOduration=2.7609773029999998 
podStartE2EDuration="1m21.508179502s" podCreationTimestamp="2026-01-28 15:21:26 +0000 UTC" firstStartedPulling="2026-01-28 15:21:28.066674348 +0000 UTC m=+178.574845152" lastFinishedPulling="2026-01-28 15:22:46.813876547 +0000 UTC m=+257.322047351" observedRunningTime="2026-01-28 15:22:47.506401282 +0000 UTC m=+258.014572076" watchObservedRunningTime="2026-01-28 15:22:47.508179502 +0000 UTC m=+258.016350306" Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.543714 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5p48j" podStartSLOduration=4.458827626 podStartE2EDuration="1m23.543692931s" podCreationTimestamp="2026-01-28 15:21:24 +0000 UTC" firstStartedPulling="2026-01-28 15:21:28.015825637 +0000 UTC m=+178.523996441" lastFinishedPulling="2026-01-28 15:22:47.100690942 +0000 UTC m=+257.608861746" observedRunningTime="2026-01-28 15:22:47.538768461 +0000 UTC m=+258.046939265" watchObservedRunningTime="2026-01-28 15:22:47.543692931 +0000 UTC m=+258.051863735" Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.562436 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w4vpf" podStartSLOduration=4.610898987 podStartE2EDuration="1m23.562419092s" podCreationTimestamp="2026-01-28 15:21:24 +0000 UTC" firstStartedPulling="2026-01-28 15:21:28.056096454 +0000 UTC m=+178.564267258" lastFinishedPulling="2026-01-28 15:22:47.007616559 +0000 UTC m=+257.515787363" observedRunningTime="2026-01-28 15:22:47.560025634 +0000 UTC m=+258.068196438" watchObservedRunningTime="2026-01-28 15:22:47.562419092 +0000 UTC m=+258.070589896" Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.577542 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gzr9v" podStartSLOduration=3.552805825 podStartE2EDuration="1m23.577523661s" podCreationTimestamp="2026-01-28 15:21:24 +0000 UTC" firstStartedPulling="2026-01-28 15:21:26.931814692 +0000 UTC m=+177.439985496" lastFinishedPulling="2026-01-28 15:22:46.956532528 +0000 UTC m=+257.464703332" observedRunningTime="2026-01-28 15:22:47.575634827 +0000 UTC m=+258.083805631" watchObservedRunningTime="2026-01-28 15:22:47.577523661 +0000 UTC m=+258.085694465" Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.597681 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nhqpx" podStartSLOduration=2.807131514 podStartE2EDuration="1m21.597662453s" podCreationTimestamp="2026-01-28 15:21:26 +0000 UTC" firstStartedPulling="2026-01-28 15:21:28.036339087 +0000 UTC m=+178.544509891" lastFinishedPulling="2026-01-28 15:22:46.826870026 +0000 UTC m=+257.335040830" observedRunningTime="2026-01-28 15:22:47.595035048 +0000 UTC m=+258.103205852" watchObservedRunningTime="2026-01-28 15:22:47.597662453 +0000 UTC m=+258.105833257" Jan 28 15:22:47 crc kubenswrapper[4656]: I0128 15:22:47.620900 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-szsbj" podStartSLOduration=4.72395908 podStartE2EDuration="1m23.620877262s" podCreationTimestamp="2026-01-28 15:21:24 +0000 UTC" firstStartedPulling="2026-01-28 15:21:28.004900973 +0000 UTC m=+178.513071777" lastFinishedPulling="2026-01-28 15:22:46.901819155 +0000 UTC m=+257.409989959" observedRunningTime="2026-01-28 15:22:47.616489708 +0000 UTC m=+258.124660522" watchObservedRunningTime="2026-01-28 15:22:47.620877262 
Jan 28 15:22:48 crc kubenswrapper[4656]: I0128 15:22:48.237807 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8dc6j"
Jan 28 15:22:48 crc kubenswrapper[4656]: I0128 15:22:48.237878 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8dc6j"
Jan 28 15:22:48 crc kubenswrapper[4656]: I0128 15:22:48.643846 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zqqvv"
Jan 28 15:22:48 crc kubenswrapper[4656]: I0128 15:22:48.645288 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zqqvv"
Jan 28 15:22:49 crc kubenswrapper[4656]: I0128 15:22:49.480204 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8dc6j" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="registry-server" probeResult="failure" output=<
Jan 28 15:22:49 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s
Jan 28 15:22:49 crc kubenswrapper[4656]: >
Jan 28 15:22:49 crc kubenswrapper[4656]: I0128 15:22:49.693068 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zqqvv" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="registry-server" probeResult="failure" output=<
Jan 28 15:22:49 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s
Jan 28 15:22:49 crc kubenswrapper[4656]: >
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.722779 4656 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.724186 4656 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.724370 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
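The two "Probe failed" records above show the registry-server startup probe timing out against gRPC port :50051; the output text matches a grpc-health-probe-style check with a 1s deadline. A minimal sketch of the same reachability test (plain TCP dial only; the real probe also issues a gRPC health RPC, and the address is an assumption for a probe run inside the container):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Approximation of the startup probe seen above: try to reach the
    // registry-server gRPC port within 1s. Only TCP reachability is tested
    // here; the real probe additionally performs a gRPC health check.
    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:50051", 1*time.Second)
        if err != nil {
            // kubelet would record probeResult="failure" with similar output
            fmt.Printf("timeout: failed to connect service %q within 1s (%v)\n", ":50051", err)
            return
        }
        conn.Close()
        fmt.Println("probe ok")
    }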
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.724574 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f" gracePeriod=15
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.724590 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821" gracePeriod=15
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.724676 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671" gracePeriod=15
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.724624 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c" gracePeriod=15
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.724729 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1" gracePeriod=15
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.726375 4656 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 28 15:22:52 crc kubenswrapper[4656]: E0128 15:22:52.726571 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.726583 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:22:52 crc kubenswrapper[4656]: E0128 15:22:52.726596 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.726603 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:22:52 crc kubenswrapper[4656]: E0128 15:22:52.726610 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.726635 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 28 15:22:52 crc kubenswrapper[4656]: E0128 15:22:52.726644 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.726649 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:22:52 crc kubenswrapper[4656]: E0128 15:22:52.726662 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.726670 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 28 15:22:52 crc kubenswrapper[4656]: E0128 15:22:52.729198 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729235 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 28 15:22:52 crc kubenswrapper[4656]: E0128 15:22:52.729248 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729255 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 28 15:22:52 crc kubenswrapper[4656]: E0128 15:22:52.729270 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729276 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729490 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729511 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729522 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729530 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729538 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729549 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.729557 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.778689 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.781525 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.781563 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.781602 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.781623 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.781658 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.781699 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.781843 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.781876 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.882707 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.882760 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.882795 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.882813 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.882835 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.882852 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.882879 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.882902 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.883018 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.883061 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.883085 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.883108 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.883127 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.883145 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.883183 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:52 crc kubenswrapper[4656]: I0128 15:22:52.883206 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.074746 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:22:53 crc kubenswrapper[4656]: W0128 15:22:53.094033 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-5153f6ea659d9f70f7b880773744e6ad1358b3f30271fc726e3c68b0fcdae782 WatchSource:0}: Error finding container 5153f6ea659d9f70f7b880773744e6ad1358b3f30271fc726e3c68b0fcdae782: Status 404 returned error can't find the container with id 5153f6ea659d9f70f7b880773744e6ad1358b3f30271fc726e3c68b0fcdae782 Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.097889 4656 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eee59cf14b4b5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:22:53.096924341 +0000 UTC m=+263.605095145,LastTimestamp:2026-01-28 15:22:53.096924341 +0000 UTC m=+263.605095145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.387254 4656 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.388045 4656 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.388408 4656 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.388666 4656 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.389005 4656 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.389105 4656 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 28 15:22:53 crc kubenswrapper[4656]: 
E0128 15:22:53.389585 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="200ms" Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.540931 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459"} Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.541038 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5153f6ea659d9f70f7b880773744e6ad1358b3f30271fc726e3c68b0fcdae782"} Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.541871 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.544096 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.545610 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.546245 4656 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821" exitCode=0 Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.546264 4656 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671" exitCode=0 Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.546272 4656 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c" exitCode=0 Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.546284 4656 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1" exitCode=2 Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.546355 4656 scope.go:117] "RemoveContainer" containerID="4474c45d44340271f70dd8f489fd9fa3929da23e93d0a0c244fef7c6542b0fc3" Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.548484 4656 generic.go:334] "Generic (PLEG): container finished" podID="23d13458-d18f-4e50-bd07-61d18319b4c7" containerID="1b53bded64e844d0eb1de8d68357820c2e6d1bfc568f8f1b250cc8890c0cc777" exitCode=0 Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.548513 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"23d13458-d18f-4e50-bd07-61d18319b4c7","Type":"ContainerDied","Data":"1b53bded64e844d0eb1de8d68357820c2e6d1bfc568f8f1b250cc8890c0cc777"} Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.549191 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:53 crc kubenswrapper[4656]: I0128 15:22:53.549805 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.590519 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="400ms" Jan 28 15:22:53 crc kubenswrapper[4656]: E0128 15:22:53.991849 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="800ms" Jan 28 15:22:54 crc kubenswrapper[4656]: E0128 15:22:54.145711 4656 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eee59cf14b4b5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:22:53.096924341 +0000 UTC m=+263.605095145,LastTimestamp:2026-01-28 15:22:53.096924341 +0000 UTC m=+263.605095145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.558251 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.636954 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.637150 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.701750 4656 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.702920 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.703177 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.703389 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:54 crc kubenswrapper[4656]: E0128 15:22:54.793852 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="1.6s" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.890853 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.891588 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.892044 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.892324 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.912509 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-kubelet-dir\") pod \"23d13458-d18f-4e50-bd07-61d18319b4c7\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.912971 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/23d13458-d18f-4e50-bd07-61d18319b4c7-kube-api-access\") pod \"23d13458-d18f-4e50-bd07-61d18319b4c7\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.913124 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-var-lock\") pod \"23d13458-d18f-4e50-bd07-61d18319b4c7\" (UID: \"23d13458-d18f-4e50-bd07-61d18319b4c7\") " Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.912841 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "23d13458-d18f-4e50-bd07-61d18319b4c7" (UID: "23d13458-d18f-4e50-bd07-61d18319b4c7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.913473 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-var-lock" (OuterVolumeSpecName: "var-lock") pod "23d13458-d18f-4e50-bd07-61d18319b4c7" (UID: "23d13458-d18f-4e50-bd07-61d18319b4c7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:54 crc kubenswrapper[4656]: I0128 15:22:54.937656 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23d13458-d18f-4e50-bd07-61d18319b4c7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "23d13458-d18f-4e50-bd07-61d18319b4c7" (UID: "23d13458-d18f-4e50-bd07-61d18319b4c7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.014677 4656 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.014706 4656 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23d13458-d18f-4e50-bd07-61d18319b4c7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.014717 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23d13458-d18f-4e50-bd07-61d18319b4c7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.090587 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.091326 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.203078 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.204193 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection 
refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.204577 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.205194 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.205605 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.244137 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.245081 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.245698 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.246139 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.246865 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.247232 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.247677 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.333385 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.333547 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.333996 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.334116 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.334225 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.334144 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.334695 4656 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.334814 4656 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.334901 4656 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.548502 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.549449 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.550473 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.550690 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.571627 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"23d13458-d18f-4e50-bd07-61d18319b4c7","Type":"ContainerDied","Data":"ecf52d47e7e50c6dd46bf76ed0fc0d2fa9f91e02bebe98c13fefcc025a606204"} Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.571728 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecf52d47e7e50c6dd46bf76ed0fc0d2fa9f91e02bebe98c13fefcc025a606204" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.571883 4656 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.578931 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.580593 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.581038 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.581271 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.581507 4656 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f" exitCode=0
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.581671 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.581761 4656 scope.go:117] "RemoveContainer" containerID="b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.581747 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.582469 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.597493 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.597988 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.598394 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.599413 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.599841 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.625433 4656 scope.go:117] "RemoveContainer" containerID="7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.625530 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-szsbj"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.626181 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.626523 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.626849 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.627267 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.627624 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.627812 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.633038 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w4vpf"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.633598 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.633882 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.634210 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.635234 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.635654 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.635959 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.636239 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.644270 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5p48j"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.644834 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.645396 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.645862 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused"
Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.646323 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection
refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.646590 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.646844 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.646858 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.647077 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.647744 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.648246 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.648511 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.648750 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.648977 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.649225 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.649552 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.649701 4656 scope.go:117] "RemoveContainer" containerID="0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.670251 4656 scope.go:117] "RemoveContainer" containerID="da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.693004 4656 scope.go:117] "RemoveContainer" containerID="82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.753179 4656 scope.go:117] "RemoveContainer" containerID="7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.775790 4656 scope.go:117] "RemoveContainer" containerID="b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.776438 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\": container with ID starting with b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821 not found: ID does not exist" containerID="b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.776488 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821"} err="failed to get container status \"b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\": rpc error: code = NotFound desc = could not find container \"b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821\": container with ID starting with b8579ea780c6b24ada60ea5c51709977299b7da6cb67914790a61442ce20a821 not found: ID does not exist" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.776518 4656 scope.go:117] "RemoveContainer" containerID="7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.777956 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\": container with ID starting with 7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671 not found: ID does not exist" containerID="7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.777985 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671"} err="failed to get container status \"7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\": rpc error: 
code = NotFound desc = could not find container \"7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671\": container with ID starting with 7d9b9470967b19f3c5568d21b73389aed781a6e53fb01056af1295b49e625671 not found: ID does not exist" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.778003 4656 scope.go:117] "RemoveContainer" containerID="0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.778287 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\": container with ID starting with 0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c not found: ID does not exist" containerID="0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.778357 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c"} err="failed to get container status \"0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\": rpc error: code = NotFound desc = could not find container \"0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c\": container with ID starting with 0345e914c423fb6f5bfe341f9749505ebf4b3ee0006ff9ebe91c1a430bd2602c not found: ID does not exist" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.778377 4656 scope.go:117] "RemoveContainer" containerID="da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.778754 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\": container with ID starting with da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1 not found: ID does not exist" containerID="da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.778813 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1"} err="failed to get container status \"da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\": rpc error: code = NotFound desc = could not find container \"da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1\": container with ID starting with da936779aa2a02867b0e8e139faf2373bd374f83b13fe85d6906a6248f3561e1 not found: ID does not exist" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.778853 4656 scope.go:117] "RemoveContainer" containerID="82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.779126 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\": container with ID starting with 82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f not found: ID does not exist" containerID="82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.779193 4656 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f"} err="failed to get container status \"82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\": rpc error: code = NotFound desc = could not find container \"82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f\": container with ID starting with 82b713c8894e1ad92a3726243111d18d39f33642271dd797bdd1e8b981a6f82f not found: ID does not exist" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.779276 4656 scope.go:117] "RemoveContainer" containerID="7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.779991 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\": container with ID starting with 7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197 not found: ID does not exist" containerID="7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197" Jan 28 15:22:55 crc kubenswrapper[4656]: I0128 15:22:55.780021 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197"} err="failed to get container status \"7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\": rpc error: code = NotFound desc = could not find container \"7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197\": container with ID starting with 7598cae7a2b57f72c4625ecadaed516a5a441f9f06b11edc6cabf7950cbc2197 not found: ID does not exist" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.876246 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:22:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:22:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:22:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:22:55Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:80389051bc0ea34449a3ee9b5472446041cb0f2e47fa9d2048010428fa1019ba\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:e9a0654b0e53f31c6f63037d06bc5145dc7b9c46a7ac2d778d473d966efb9e14\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1675675872},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1202349806},{\\\"names\\\":[],\\\"sizeBytes\\\":1187310829},{\\\"names\\\":[],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e412
0a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.877380 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.877914 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.878394 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.878640 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:55 crc kubenswrapper[4656]: E0128 15:22:55.878663 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:22:56 crc kubenswrapper[4656]: E0128 15:22:56.394530 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="3.2s" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.633824 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.634587 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.634844 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.635089 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.635496 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.635727 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.635950 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.636122 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.638413 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.638677 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.638855 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.639034 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.639243 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.639428 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.639630 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 
15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.639825 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.669478 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.669625 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.710250 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.710949 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.711407 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.711748 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.712048 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.712417 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.712820 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.713117 4656 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:56 crc kubenswrapper[4656]: I0128 15:22:56.713474 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.116559 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.116856 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.179714 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.181551 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.182207 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.182591 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.182954 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.183272 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.183457 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.183667 4656 status_manager.go:851] "Failed to get status for pod" 
podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.183921 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.184205 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.659541 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.660141 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.660496 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.660737 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.660814 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.661032 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.661393 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.661615 4656 status_manager.go:851] "Failed to get status 
for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.661906 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.662215 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.662620 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.662861 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.663092 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.663437 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.663713 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.664575 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.664989 4656 
status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:57 crc kubenswrapper[4656]: I0128 15:22:57.665181 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.293072 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.293730 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.294096 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.294527 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.294876 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.295069 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.295343 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.296250 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.296683 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.296885 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.337251 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.337708 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.337896 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.338065 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.338226 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.338440 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.338678 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc 
kubenswrapper[4656]: I0128 15:22:58.338836 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.339032 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.339298 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.681262 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.682292 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.682643 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.683122 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.683382 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.683595 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.683825 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" 
pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.684081 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.684343 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.684566 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.684781 4656 status_manager.go:851] "Failed to get status for pod" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" pod="openshift-marketplace/redhat-operators-zqqvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zqqvv\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.720440 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.720892 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.721293 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.721478 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.721621 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 
38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.721797 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.721968 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.722112 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.722298 4656 status_manager.go:851] "Failed to get status for pod" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" pod="openshift-marketplace/redhat-operators-zqqvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zqqvv\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.722455 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:58 crc kubenswrapper[4656]: I0128 15:22:58.722604 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:22:59 crc kubenswrapper[4656]: E0128 15:22:59.595036 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="6.4s" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.173567 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.174084 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 
crc kubenswrapper[4656]: I0128 15:23:01.174300 4656 status_manager.go:851] "Failed to get status for pod" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" pod="openshift-marketplace/redhat-operators-zqqvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zqqvv\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.174470 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.174776 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.175452 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.175712 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.176092 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.176442 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:01 crc kubenswrapper[4656]: I0128 15:23:01.177470 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:04 crc kubenswrapper[4656]: E0128 15:23:04.147099 4656 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eee59cf14b4b5 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:22:53.096924341 +0000 UTC m=+263.605095145,LastTimestamp:2026-01-28 15:22:53.096924341 +0000 UTC m=+263.605095145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:23:05 crc kubenswrapper[4656]: E0128 15:23:05.173009 4656 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" volumeName="registry-storage" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.661789 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.661866 4656 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="08932142792b5b7e1afc60e25e6fb6b092c9c65185a0e407f807d90b1928807c" exitCode=1 Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.661909 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"08932142792b5b7e1afc60e25e6fb6b092c9c65185a0e407f807d90b1928807c"} Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.662569 4656 scope.go:117] "RemoveContainer" containerID="08932142792b5b7e1afc60e25e6fb6b092c9c65185a0e407f807d90b1928807c" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.663017 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.663547 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.663750 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.663989 4656 status_manager.go:851] "Failed to get status for pod" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" pod="openshift-marketplace/redhat-operators-zqqvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zqqvv\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.664239 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.664461 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.665488 4656 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.666008 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.666267 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.666450 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: I0128 15:23:05.666594 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:05 crc kubenswrapper[4656]: E0128 15:23:05.996093 4656 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection 
refused" interval="7s" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.093914 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:23:06 crc kubenswrapper[4656]: E0128 15:23:06.113878 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:23:06Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:23:06Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:23:06Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:23:06Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:80389051bc0ea34449a3ee9b5472446041cb0f2e47fa9d2048010428fa1019ba\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:e9a0654b0e53f31c6f63037d06bc5145dc7b9c46a7ac2d778d473d966efb9e14\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1675675872},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1202349806},{\\\"names\\\":[],\\\"sizeBytes\\\":1187310829},{\\\"names\\\":[],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06
bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for 
node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: E0128 15:23:06.114862 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: E0128 15:23:06.115304 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: E0128 15:23:06.115914 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: E0128 15:23:06.116533 4656 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: E0128 15:23:06.116563 4656 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.170002 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.171038 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.172287 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.172912 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.176321 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.176782 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.177048 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.177258 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.177582 4656 status_manager.go:851] "Failed to get status for pod" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" pod="openshift-marketplace/redhat-operators-zqqvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zqqvv\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.177826 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.178076 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.178368 4656 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.191487 4656 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.191531 4656 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:06 crc kubenswrapper[4656]: E0128 15:23:06.191914 4656 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.192403 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.673682 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.673851 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"128b7d572711eb1bedbcebe7171d9e0ccb731d7abb7b0938208d1595a6627ed0"} Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.676215 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.676596 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.677057 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.677484 4656 status_manager.go:851] "Failed to get status for pod" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" pod="openshift-marketplace/redhat-operators-zqqvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zqqvv\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.677860 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.678343 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.678692 4656 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.678954 4656 generic.go:334] 
"Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4214e887b6e054911c39dc34a9c052ddf9ed1596e5f79a9ee81a9055a585840d" exitCode=0 Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.678993 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4214e887b6e054911c39dc34a9c052ddf9ed1596e5f79a9ee81a9055a585840d"} Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.679008 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.679022 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3c074a23ef153f447e5ea166f0f0633974aaee9b578f2c42b78227590475bd08"} Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.679280 4656 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.679334 4656 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.679370 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: E0128 15:23:06.679642 4656 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.679666 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.679953 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.680486 4656 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: 
connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.680955 4656 status_manager.go:851] "Failed to get status for pod" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" pod="openshift-marketplace/redhat-marketplace-cxc6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cxc6z\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.681356 4656 status_manager.go:851] "Failed to get status for pod" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.681586 4656 status_manager.go:851] "Failed to get status for pod" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" pod="openshift-marketplace/redhat-operators-zqqvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zqqvv\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.681798 4656 status_manager.go:851] "Failed to get status for pod" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" pod="openshift-marketplace/certified-operators-5p48j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5p48j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.681982 4656 status_manager.go:851] "Failed to get status for pod" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" pod="openshift-marketplace/redhat-operators-8dc6j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8dc6j\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.682193 4656 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.682459 4656 status_manager.go:851] "Failed to get status for pod" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" pod="openshift-marketplace/community-operators-w4vpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-w4vpf\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.682900 4656 status_manager.go:851] "Failed to get status for pod" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" pod="openshift-marketplace/certified-operators-gzr9v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzr9v\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.683262 4656 status_manager.go:851] "Failed to get status for pod" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" pod="openshift-marketplace/redhat-marketplace-nhqpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhqpx\": dial tcp 
38.102.83.196:6443: connect: connection refused" Jan 28 15:23:06 crc kubenswrapper[4656]: I0128 15:23:06.683604 4656 status_manager.go:851] "Failed to get status for pod" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" pod="openshift-marketplace/community-operators-szsbj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szsbj\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 28 15:23:07 crc kubenswrapper[4656]: I0128 15:23:07.689100 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8f37329f616c78ad784c13eda77eaa88794e0052f7e00cd32d49715a2f9a4cf0"} Jan 28 15:23:07 crc kubenswrapper[4656]: I0128 15:23:07.689524 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"05d03fb362120040d14cb147447c1af0b7f5e4c3498035d023d552051966dc6c"} Jan 28 15:23:07 crc kubenswrapper[4656]: I0128 15:23:07.689541 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9c2624c7462adb66d9805bea12ae87c3a87076025db60bee98300588697c6b1c"} Jan 28 15:23:07 crc kubenswrapper[4656]: I0128 15:23:07.689555 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9650d2ad1dc114811a53ef76665ab24eee05a00c56dd02b0e49cd508e6997d1a"} Jan 28 15:23:08 crc kubenswrapper[4656]: I0128 15:23:08.698338 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ec619f3e9442b716a8a88f3821ffe616904d9496df15c693b7995136e969b8fd"} Jan 28 15:23:08 crc kubenswrapper[4656]: I0128 15:23:08.699436 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:08 crc kubenswrapper[4656]: I0128 15:23:08.698724 4656 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:08 crc kubenswrapper[4656]: I0128 15:23:08.699618 4656 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:11 crc kubenswrapper[4656]: I0128 15:23:11.192772 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:11 crc kubenswrapper[4656]: I0128 15:23:11.192830 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:11 crc kubenswrapper[4656]: I0128 15:23:11.200678 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:13 crc kubenswrapper[4656]: I0128 15:23:13.726787 4656 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:13 crc kubenswrapper[4656]: I0128 15:23:13.952921 4656 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f3672fad-4cf4-4873-9065-1100f7b36ed0" Jan 28 15:23:14 crc kubenswrapper[4656]: I0128 15:23:14.730230 4656 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:14 crc kubenswrapper[4656]: I0128 15:23:14.730543 4656 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:14 crc kubenswrapper[4656]: I0128 15:23:14.733335 4656 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f3672fad-4cf4-4873-9065-1100f7b36ed0" Jan 28 15:23:14 crc kubenswrapper[4656]: I0128 15:23:14.736209 4656 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://9650d2ad1dc114811a53ef76665ab24eee05a00c56dd02b0e49cd508e6997d1a" Jan 28 15:23:14 crc kubenswrapper[4656]: I0128 15:23:14.736245 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:23:15 crc kubenswrapper[4656]: I0128 15:23:15.261685 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:23:15 crc kubenswrapper[4656]: I0128 15:23:15.261826 4656 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 28 15:23:15 crc kubenswrapper[4656]: I0128 15:23:15.261897 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 28 15:23:15 crc kubenswrapper[4656]: I0128 15:23:15.734846 4656 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:15 crc kubenswrapper[4656]: I0128 15:23:15.734876 4656 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ce9a6c7-62ad-4d0e-955e-dcb43dac9226" Jan 28 15:23:15 crc kubenswrapper[4656]: I0128 15:23:15.738061 4656 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f3672fad-4cf4-4873-9065-1100f7b36ed0" Jan 28 15:23:16 crc kubenswrapper[4656]: I0128 15:23:16.093984 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:23:23 crc kubenswrapper[4656]: I0128 15:23:23.933396 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 15:23:24 crc kubenswrapper[4656]: I0128 15:23:24.323871 4656 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 15:23:24 
Jan 28 15:23:24 crc kubenswrapper[4656]: I0128 15:23:24.456944 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 28 15:23:25 crc kubenswrapper[4656]: I0128 15:23:25.120089 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 28 15:23:25 crc kubenswrapper[4656]: I0128 15:23:25.181059 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 28 15:23:25 crc kubenswrapper[4656]: I0128 15:23:25.261980 4656 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 28 15:23:25 crc kubenswrapper[4656]: I0128 15:23:25.262031 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 28 15:23:25 crc kubenswrapper[4656]: I0128 15:23:25.423338 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 28 15:23:25 crc kubenswrapper[4656]: I0128 15:23:25.645314 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 28 15:23:25 crc kubenswrapper[4656]: I0128 15:23:25.698986 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 28 15:23:25 crc kubenswrapper[4656]: I0128 15:23:25.948809 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.066581 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.109458 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.122740 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.180586 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.252054 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.336015 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.374129 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.431510 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.538925 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.561483 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.696637 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.805469 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.866244 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 28 15:23:26 crc kubenswrapper[4656]: I0128 15:23:26.886274 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.094902 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.148996 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.232495 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.251901 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.338571 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.435694 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.449363 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.472436 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.560951 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 28 15:23:27 crc kubenswrapper[4656]: I0128 15:23:27.747745 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.052978 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.235303 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.312222 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.403508 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.503316 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.521351 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.529082 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.543470 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.655532 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.828284 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 28 15:23:28 crc kubenswrapper[4656]: I0128 15:23:28.842768 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.032657 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.041299 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.074011 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.175660 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.187593 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.208114 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.216955 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.218393 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.409203 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.409203 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.467568 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.471078 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.487856 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.546951 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.614410 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.719989 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.783346 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.786130 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.805469 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.837354 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.850806 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 28 15:23:29 crc kubenswrapper[4656]: I0128 15:23:29.960881 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.030113 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.049774 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.086288 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.163134 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.204917 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.231301 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.368880 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.370950 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.407683 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.425572 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.429823 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.461673 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.465690 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.500460 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.568024 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.604298 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.640559 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.750229 4656 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.773056 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.784562 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.865559 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.888913 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 28 15:23:30 crc kubenswrapper[4656]: I0128 15:23:30.928948 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.021661 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.109339 4656 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.243481 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
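
Note: the cert_rotation.go:91 line above is the kubelet's certificate manager noticing that its client certificate was rotated; it closes existing API connections so that new ones are built with the fresh credentials. A minimal sketch of the underlying idea (an assumed illustration of hot-reloading a rotated keypair, not the kubelet's actual code path; the PEM path is the conventional kubelet location and may differ):

    package main

    import (
    	"crypto/tls"
    	"net/http"
    )

    // newClient returns an HTTP client whose TLS layer reloads the client
    // keypair on every handshake, so a certificate rotated on disk is picked
    // up without restarting the process.
    func newClient(pemFile string) *http.Client {
    	cfg := &tls.Config{
    		GetClientCertificate: func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
    			// The combined cert+key PEM is re-read each handshake.
    			cert, err := tls.LoadX509KeyPair(pemFile, pemFile)
    			if err != nil {
    				return nil, err
    			}
    			return &cert, nil
    		},
    	}
    	return &http.Client{Transport: &http.Transport{TLSClientConfig: cfg}}
    }

    func main() {
    	_ = newClient("/var/lib/kubelet/pki/kubelet-client-current.pem")
    }
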
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.284708 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.414423 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.474475 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.605698 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.644444 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.764997 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.765954 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.797030 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.819842 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.904499 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.926744 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 28 15:23:31 crc kubenswrapper[4656]: I0128 15:23:31.991915 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.000738 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.026457 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.040532 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.123480 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.152852 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.232121 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.315115 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.323195 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.385061 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.441280 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.489021 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.492005 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.536434 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.718037 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.755915 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.780877 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.805757 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.845435 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.902466 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.915584 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.938724 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 28 15:23:32 crc kubenswrapper[4656]: I0128 15:23:32.940135 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.087142 4656 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.220648 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.352958 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.434052 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.442709 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
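
Note: this long run of reflector.go:368 "Caches populated" lines records the kubelet's watches finishing their initial LIST against the API server: one cache per ConfigMap/Secret that some pod on the node mounts (the object-"namespace"/"name" entries), plus node-level informers for RuntimeClass, CSIDriver, Service, Node, and Pod. The same client-go machinery appears in any shared informer; a minimal sketch of that general pattern (standard client-go usage, not the kubelet's exact per-object setup):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumes a kubeconfig at the default location; error handling trimmed.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
    	cmInformer := factory.Core().V1().ConfigMaps().Informer()

    	ctx, cancel := context.WithCancel(context.Background())
    	defer cancel()
    	factory.Start(ctx.Done())

    	// WaitForCacheSync returns once the reflector's initial LIST is done --
    	// the moment at which the kubelet logs "Caches populated".
    	if !cache.WaitForCacheSync(ctx.Done(), cmInformer.HasSynced) {
    		panic("cache never synced")
    	}
    	fmt.Println("ConfigMap cache populated")
    }
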
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.477596 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.510183 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.519210 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.550040 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.559426 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.561662 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.603578 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.610211 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.664554 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.726935 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.781092 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.786280 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.791252 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.802609 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.894604 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.911712 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 28 15:23:33 crc kubenswrapper[4656]: I0128 15:23:33.959150 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.179658 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.289914 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.292254 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.350486 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.481250 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.488740 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.493865 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.601387 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.634262 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.669835 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.692402 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.802577 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.856047 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.923964 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 28 15:23:34 crc kubenswrapper[4656]: I0128 15:23:34.960011 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.079118 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.254666 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.267851 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.271928 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.276927 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.313537 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.330582 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.369549 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.378369 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.390193 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.434665 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.437117 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.453716 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.528369 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.585354 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.615729 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.658086 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.689269 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.751470 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.892597 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.912020 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 28 15:23:35 crc kubenswrapper[4656]: I0128 15:23:35.960083 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.031133 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.063114 4656 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.066648 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.135981 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.231877 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.275969 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.321348 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.464966 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.483471 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.534825 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.564912 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.582954 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.601286 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.609259 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.628040 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.666145 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.805235 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.840988 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.896655 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 28 15:23:36 crc kubenswrapper[4656]: I0128 15:23:36.976018 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.018851 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.027859 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.067809 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.081797 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.115201 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.173096 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.205739 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.253094 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.348388 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.430138 4656 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.457326 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.474624 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.573033 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.618035 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.648576 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.667556 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.673275 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.743838 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.791729 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 28 15:23:37 crc kubenswrapper[4656]: I0128 15:23:37.920271 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.027880 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.040917 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.106899 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.122132 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.176804 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.285310 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.289462 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.350294 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.356660 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.396016 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.504474 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.513570 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.632210 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.744486 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.752200 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.778151 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.898608 4656 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.901257 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=46.901222664 podStartE2EDuration="46.901222664s" podCreationTimestamp="2026-01-28 15:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:13.878436321 +0000 UTC m=+284.386607135" watchObservedRunningTime="2026-01-28 15:23:38.901222664 +0000 UTC m=+309.409393478"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.904211 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.904268 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.914087 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:23:38 crc kubenswrapper[4656]: I0128 15:23:38.925958 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.925935404 podStartE2EDuration="25.925935404s" podCreationTimestamp="2026-01-28 15:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:38.924844335 +0000 UTC m=+309.433015149" watchObservedRunningTime="2026-01-28 15:23:38.925935404 +0000 UTC m=+309.434106218"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.028608 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.165893 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.210187 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.443406 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.506938 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.619637 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.707481 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.715209 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.756946 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 28 15:23:39 crc kubenswrapper[4656]: I0128 15:23:39.822875 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 28 15:23:41 crc kubenswrapper[4656]: I0128 15:23:41.065001 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
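
Note: the pod_startup_latency_tracker lines above report SLO-style startup durations measured from podCreationTimestamp (the "0001-01-01" pull timestamps are Go zero times, meaning no image pull was needed), and the "m=+309.4..." suffix on the timestamps is Go's monotonic clock reading, i.e. seconds since the kubelet process started. A tiny self-contained illustration of where that suffix comes from:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	start := time.Now() // carries a monotonic clock reading
    	time.Sleep(50 * time.Millisecond)
    	now := time.Now()
    	// Printing a time.Time that still carries its monotonic reading
    	// appends "m=+<seconds>", the same suffix seen in the log above.
    	fmt.Println(now)
    	fmt.Println(now.Sub(start)) // monotonic-safe duration, ~50ms
    }
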
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 15:23:41 crc kubenswrapper[4656]: I0128 15:23:41.657240 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 15:23:47 crc kubenswrapper[4656]: I0128 15:23:47.652501 4656 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:23:47 crc kubenswrapper[4656]: I0128 15:23:47.653291 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459" gracePeriod=5 Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.224483 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.224909 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.334672 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.334745 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.334828 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.334890 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.334907 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.335488 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.335488 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.335525 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.335496 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.352940 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.401076 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.401423 4656 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459" exitCode=137 Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.401509 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.401587 4656 scope.go:117] "RemoveContainer" containerID="c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.416737 4656 scope.go:117] "RemoveContainer" containerID="c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459" Jan 28 15:23:53 crc kubenswrapper[4656]: E0128 15:23:53.417275 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459\": container with ID starting with c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459 not found: ID does not exist" containerID="c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.417400 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459"} err="failed to get container status \"c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459\": rpc error: code = NotFound desc = could not find container \"c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459\": container with ID starting with c43e2448d5e908866a05987afe071d74d5391aa55203662b2d260ae65b3a3459 not found: ID does not exist" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.436504 4656 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.436538 4656 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.436553 4656 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.436561 4656 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:53 crc kubenswrapper[4656]: I0128 15:23:53.436569 4656 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:55 crc kubenswrapper[4656]: I0128 15:23:55.176927 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 28 15:23:55 crc kubenswrapper[4656]: I0128 15:23:55.177197 4656 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 28 15:23:55 crc kubenswrapper[4656]: I0128 15:23:55.187129 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:23:55 crc kubenswrapper[4656]: I0128 15:23:55.187181 4656 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" 
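
Note: the block above is the clean teardown of the kube-apiserver-startup-monitor static pod. The kubelet kills the container with a 5s grace period; exitCode=137 shows it was ultimately SIGKILLed; the host-path volumes are unmounted and detached; and the "ContainerStatus from runtime service failed ... NotFound" error is a benign race, since CRI-O had already deleted the container by the time the kubelet retried RemoveContainer. Callers typically treat that gRPC NotFound as success; a sketch of the pattern (generic gRPC status handling, with a stubbed remove call and a helper name of our own, not the kubelet's exact code):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeIgnoringNotFound wraps a container-removal call and treats
    // "already gone" as success, mirroring how the kubelet tolerates the
    // NotFound error in the log above.
    func removeIgnoringNotFound(remove func() error) error {
    	err := remove()
    	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
    		return nil // the runtime already deleted the container
    	}
    	return err
    }

    func main() {
    	stub := func() error {
    		return status.Error(codes.NotFound, "could not find container")
    	}
    	fmt.Println(removeIgnoringNotFound(stub)) // prints <nil>
    }
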
mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0d242c0c-b094-4c8d-88b5-e88f0447cf64" Jan 28 15:23:55 crc kubenswrapper[4656]: I0128 15:23:55.191417 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:23:55 crc kubenswrapper[4656]: I0128 15:23:55.191437 4656 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0d242c0c-b094-4c8d-88b5-e88f0447cf64" Jan 28 15:23:56 crc kubenswrapper[4656]: I0128 15:23:56.421410 4656 generic.go:334] "Generic (PLEG): container finished" podID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerID="bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469" exitCode=0 Jan 28 15:23:56 crc kubenswrapper[4656]: I0128 15:23:56.421541 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" event={"ID":"c7b09f99-0d13-49a0-8b8d-fc77915a171d","Type":"ContainerDied","Data":"bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469"} Jan 28 15:23:56 crc kubenswrapper[4656]: I0128 15:23:56.422669 4656 scope.go:117] "RemoveContainer" containerID="bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469" Jan 28 15:23:56 crc kubenswrapper[4656]: I0128 15:23:56.740453 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:23:56 crc kubenswrapper[4656]: I0128 15:23:56.741508 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:23:57 crc kubenswrapper[4656]: I0128 15:23:57.430596 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" event={"ID":"c7b09f99-0d13-49a0-8b8d-fc77915a171d","Type":"ContainerStarted","Data":"38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4"} Jan 28 15:23:57 crc kubenswrapper[4656]: I0128 15:23:57.431735 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:23:57 crc kubenswrapper[4656]: I0128 15:23:57.435122 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.184410 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l99lt"] Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.185024 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" podUID="97f85e75-6682-490f-9f1d-cdf924a67f38" containerName="controller-manager" containerID="cri-o://f0551be6db9554b1b88c01d840464311627da50873cfe249dfc47e8c1d6604bf" gracePeriod=30 Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.284557 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"] Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.284744 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" podUID="a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" 
containerName="route-controller-manager" containerID="cri-o://2ccc89aaa2aabb8a9e2f194a6069859e93e63c7aba50492e992ba9629461da97" gracePeriod=30 Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.384469 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5p48j"] Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.384786 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5p48j" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerName="registry-server" containerID="cri-o://8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c" gracePeriod=2 Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.460423 4656 generic.go:334] "Generic (PLEG): container finished" podID="97f85e75-6682-490f-9f1d-cdf924a67f38" containerID="f0551be6db9554b1b88c01d840464311627da50873cfe249dfc47e8c1d6604bf" exitCode=0 Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.460507 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" event={"ID":"97f85e75-6682-490f-9f1d-cdf924a67f38","Type":"ContainerDied","Data":"f0551be6db9554b1b88c01d840464311627da50873cfe249dfc47e8c1d6604bf"} Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.462260 4656 generic.go:334] "Generic (PLEG): container finished" podID="a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" containerID="2ccc89aaa2aabb8a9e2f194a6069859e93e63c7aba50492e992ba9629461da97" exitCode=0 Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.462297 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" event={"ID":"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7","Type":"ContainerDied","Data":"2ccc89aaa2aabb8a9e2f194a6069859e93e63c7aba50492e992ba9629461da97"} Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.704352 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.710022 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.782146 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-szsbj"] Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.782556 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-szsbj" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerName="registry-server" containerID="cri-o://cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723" gracePeriod=2 Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.797046 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.886916 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f85e75-6682-490f-9f1d-cdf924a67f38-serving-cert\") pod \"97f85e75-6682-490f-9f1d-cdf924a67f38\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.886976 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s67ff\" (UniqueName: \"kubernetes.io/projected/97f85e75-6682-490f-9f1d-cdf924a67f38-kube-api-access-s67ff\") pod \"97f85e75-6682-490f-9f1d-cdf924a67f38\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.887016 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-config\") pod \"97f85e75-6682-490f-9f1d-cdf924a67f38\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.887051 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config\") pod \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.887068 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-client-ca\") pod \"97f85e75-6682-490f-9f1d-cdf924a67f38\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.887112 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-proxy-ca-bundles\") pod \"97f85e75-6682-490f-9f1d-cdf924a67f38\" (UID: \"97f85e75-6682-490f-9f1d-cdf924a67f38\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.887145 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca\") pod \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.887199 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7qd6\" (UniqueName: \"kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6\") pod \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.887838 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config" (OuterVolumeSpecName: "config") pod "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" (UID: "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.888608 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "97f85e75-6682-490f-9f1d-cdf924a67f38" (UID: "97f85e75-6682-490f-9f1d-cdf924a67f38"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.888857 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca" (OuterVolumeSpecName: "client-ca") pod "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" (UID: "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.888877 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-config" (OuterVolumeSpecName: "config") pod "97f85e75-6682-490f-9f1d-cdf924a67f38" (UID: "97f85e75-6682-490f-9f1d-cdf924a67f38"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.888921 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-client-ca" (OuterVolumeSpecName: "client-ca") pod "97f85e75-6682-490f-9f1d-cdf924a67f38" (UID: "97f85e75-6682-490f-9f1d-cdf924a67f38"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.890845 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert\") pod \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\" (UID: \"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.891314 4656 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.891331 4656 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.891343 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.891352 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.891361 4656 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f85e75-6682-490f-9f1d-cdf924a67f38-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.894380 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6" (OuterVolumeSpecName: "kube-api-access-d7qd6") pod "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" (UID: "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7"). InnerVolumeSpecName "kube-api-access-d7qd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.895411 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f85e75-6682-490f-9f1d-cdf924a67f38-kube-api-access-s67ff" (OuterVolumeSpecName: "kube-api-access-s67ff") pod "97f85e75-6682-490f-9f1d-cdf924a67f38" (UID: "97f85e75-6682-490f-9f1d-cdf924a67f38"). InnerVolumeSpecName "kube-api-access-s67ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.895670 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" (UID: "a9d5ce28-bfd3-4a89-9339-e2df3378e9d7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.895681 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f85e75-6682-490f-9f1d-cdf924a67f38-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "97f85e75-6682-490f-9f1d-cdf924a67f38" (UID: "97f85e75-6682-490f-9f1d-cdf924a67f38"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.992312 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-utilities\") pod \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.992417 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnkz5\" (UniqueName: \"kubernetes.io/projected/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-kube-api-access-xnkz5\") pod \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.992526 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-catalog-content\") pod \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\" (UID: \"42c5c29d-eebc-40b2-8a6d-a7a592efd69d\") " Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.993823 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f85e75-6682-490f-9f1d-cdf924a67f38-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.993844 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s67ff\" (UniqueName: \"kubernetes.io/projected/97f85e75-6682-490f-9f1d-cdf924a67f38-kube-api-access-s67ff\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.993889 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-utilities" (OuterVolumeSpecName: "utilities") pod "42c5c29d-eebc-40b2-8a6d-a7a592efd69d" (UID: 
"42c5c29d-eebc-40b2-8a6d-a7a592efd69d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.993860 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7qd6\" (UniqueName: \"kubernetes.io/projected/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-kube-api-access-d7qd6\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.993989 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:01 crc kubenswrapper[4656]: I0128 15:24:01.996518 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-kube-api-access-xnkz5" (OuterVolumeSpecName: "kube-api-access-xnkz5") pod "42c5c29d-eebc-40b2-8a6d-a7a592efd69d" (UID: "42c5c29d-eebc-40b2-8a6d-a7a592efd69d"). InnerVolumeSpecName "kube-api-access-xnkz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.055901 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42c5c29d-eebc-40b2-8a6d-a7a592efd69d" (UID: "42c5c29d-eebc-40b2-8a6d-a7a592efd69d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.095601 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.095671 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnkz5\" (UniqueName: \"kubernetes.io/projected/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-kube-api-access-xnkz5\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.095687 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5c29d-eebc-40b2-8a6d-a7a592efd69d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.124070 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.196147 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-utilities\") pod \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.196233 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-catalog-content\") pod \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.197040 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-utilities" (OuterVolumeSpecName: "utilities") pod "d6812603-edd0-45f4-b2b3-6d9ece7e98c2" (UID: "d6812603-edd0-45f4-b2b3-6d9ece7e98c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.204319 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj8vt\" (UniqueName: \"kubernetes.io/projected/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-kube-api-access-nj8vt\") pod \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\" (UID: \"d6812603-edd0-45f4-b2b3-6d9ece7e98c2\") " Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.204758 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.208094 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-kube-api-access-nj8vt" (OuterVolumeSpecName: "kube-api-access-nj8vt") pod "d6812603-edd0-45f4-b2b3-6d9ece7e98c2" (UID: "d6812603-edd0-45f4-b2b3-6d9ece7e98c2"). InnerVolumeSpecName "kube-api-access-nj8vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.238681 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6812603-edd0-45f4-b2b3-6d9ece7e98c2" (UID: "d6812603-edd0-45f4-b2b3-6d9ece7e98c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.306472 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj8vt\" (UniqueName: \"kubernetes.io/projected/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-kube-api-access-nj8vt\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.306501 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6812603-edd0-45f4-b2b3-6d9ece7e98c2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.471731 4656 generic.go:334] "Generic (PLEG): container finished" podID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerID="cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723" exitCode=0 Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.471780 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szsbj" event={"ID":"d6812603-edd0-45f4-b2b3-6d9ece7e98c2","Type":"ContainerDied","Data":"cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723"} Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.471813 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szsbj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.471857 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szsbj" event={"ID":"d6812603-edd0-45f4-b2b3-6d9ece7e98c2","Type":"ContainerDied","Data":"924a066d9ecea51a84946c62ff66f97e3aaf6278887d64d1b9553585e9692514"} Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.471896 4656 scope.go:117] "RemoveContainer" containerID="cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.476266 4656 generic.go:334] "Generic (PLEG): container finished" podID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerID="8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c" exitCode=0 Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.476335 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p48j" event={"ID":"42c5c29d-eebc-40b2-8a6d-a7a592efd69d","Type":"ContainerDied","Data":"8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c"} Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.476361 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5p48j" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.476365 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p48j" event={"ID":"42c5c29d-eebc-40b2-8a6d-a7a592efd69d","Type":"ContainerDied","Data":"97e67bc73ce17661993af527d6be763d8d88936ad8acbd056a6ba60090f1140e"} Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.482800 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.482969 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5" event={"ID":"a9d5ce28-bfd3-4a89-9339-e2df3378e9d7","Type":"ContainerDied","Data":"8331e16235e5a94b4f704932116317a3f82336aecaf9afbef2cef9053d0f0822"} Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.486014 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" event={"ID":"97f85e75-6682-490f-9f1d-cdf924a67f38","Type":"ContainerDied","Data":"c4a2df154e1c3edead1c41618548fc41c7c90a8f88ea7c9973c02acba56d736c"} Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.486114 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l99lt" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.497809 4656 scope.go:117] "RemoveContainer" containerID="94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.609551 4656 scope.go:117] "RemoveContainer" containerID="b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.618663 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-szsbj"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.625291 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-szsbj"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.628439 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l99lt"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.633557 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l99lt"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.639966 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5p48j"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.640775 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5p48j"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.649475 4656 scope.go:117] "RemoveContainer" containerID="cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.649692 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"] Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.649999 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723\": container with ID starting with cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723 not found: ID does not exist" containerID="cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.650034 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723"} err="failed to get container status 
\"cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723\": rpc error: code = NotFound desc = could not find container \"cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723\": container with ID starting with cdb2da8db7544c0e488ff567974403949c36584123e337c21e60e84527ea3723 not found: ID does not exist" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.650056 4656 scope.go:117] "RemoveContainer" containerID="94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.653289 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2\": container with ID starting with 94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2 not found: ID does not exist" containerID="94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.653453 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2"} err="failed to get container status \"94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2\": rpc error: code = NotFound desc = could not find container \"94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2\": container with ID starting with 94edba98989aee971f8fbf32105d65508b6fbf001f3250016a2a94d92ed527d2 not found: ID does not exist" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.653559 4656 scope.go:117] "RemoveContainer" containerID="b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.656216 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4twj5"] Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.656356 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414\": container with ID starting with b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414 not found: ID does not exist" containerID="b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.656381 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414"} err="failed to get container status \"b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414\": rpc error: code = NotFound desc = could not find container \"b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414\": container with ID starting with b6a8f551e49997d0dd6375bba6fd9bbfbafd79d99d1be9bf6eab85e93fe8b414 not found: ID does not exist" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.656402 4656 scope.go:117] "RemoveContainer" containerID="8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.660670 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-667b4"] Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.661463 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" 
containerName="extract-utilities" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.661540 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerName="extract-utilities" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.661687 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.661762 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.661840 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" containerName="installer" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.661909 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" containerName="installer" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.661977 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerName="extract-content" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.662033 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerName="extract-content" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.662092 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f85e75-6682-490f-9f1d-cdf924a67f38" containerName="controller-manager" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.662280 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f85e75-6682-490f-9f1d-cdf924a67f38" containerName="controller-manager" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.662368 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerName="extract-utilities" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.662626 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerName="extract-utilities" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.662699 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerName="registry-server" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.662767 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerName="registry-server" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.662841 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerName="extract-content" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.662928 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerName="extract-content" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.662999 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" containerName="route-controller-manager" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.663092 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" containerName="route-controller-manager" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.663279 4656 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerName="registry-server" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.663367 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerName="registry-server" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.663908 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" containerName="route-controller-manager" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.664009 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.664099 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" containerName="registry-server" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.664179 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="23d13458-d18f-4e50-bd07-61d18319b4c7" containerName="installer" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.664255 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="97f85e75-6682-490f-9f1d-cdf924a67f38" containerName="controller-manager" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.664316 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" containerName="registry-server" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.664913 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.667099 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.667808 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.677841 4656 scope.go:117] "RemoveContainer" containerID="6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.690450 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-667b4"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.692340 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj"] Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.693115 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.695740 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.697675 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.697738 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.698378 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.698496 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.698620 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.699074 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.699376 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.699566 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.699932 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.704900 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.707797 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.711884 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-proxy-ca-bundles\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " 
pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.711968 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-client-ca\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.712001 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-config\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.712045 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/986d7566-ac0b-4651-b2ea-3a839466d5be-serving-cert\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.712070 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-client-ca\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.712095 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-config\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.712171 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-serving-cert\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.712219 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqf4c\" (UniqueName: \"kubernetes.io/projected/986d7566-ac0b-4651-b2ea-3a839466d5be-kube-api-access-lqf4c\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.712258 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25kqc\" (UniqueName: \"kubernetes.io/projected/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-kube-api-access-25kqc\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " 
pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.721502 4656 scope.go:117] "RemoveContainer" containerID="cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.745053 4656 scope.go:117] "RemoveContainer" containerID="8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.745500 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c\": container with ID starting with 8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c not found: ID does not exist" containerID="8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.745527 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c"} err="failed to get container status \"8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c\": rpc error: code = NotFound desc = could not find container \"8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c\": container with ID starting with 8643c4c9721799cbf4e1405528106f4d556e60626a98ab5d7e60a3845afeb44c not found: ID does not exist" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.745550 4656 scope.go:117] "RemoveContainer" containerID="6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.745744 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2\": container with ID starting with 6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2 not found: ID does not exist" containerID="6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.745772 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2"} err="failed to get container status \"6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2\": rpc error: code = NotFound desc = could not find container \"6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2\": container with ID starting with 6a7923cd8bac98c8eafadfac1fbd4fd8b90be7ea4e2521e51c4ac6739d7254f2 not found: ID does not exist" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.745786 4656 scope.go:117] "RemoveContainer" containerID="cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98" Jan 28 15:24:02 crc kubenswrapper[4656]: E0128 15:24:02.745981 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98\": container with ID starting with cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98 not found: ID does not exist" containerID="cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.746002 4656 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98"} err="failed to get container status \"cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98\": rpc error: code = NotFound desc = could not find container \"cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98\": container with ID starting with cd57508222f02117637d6fbdf45e0f2ae514b13cb69b5745ffd02ed6bd0ced98 not found: ID does not exist" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.746017 4656 scope.go:117] "RemoveContainer" containerID="2ccc89aaa2aabb8a9e2f194a6069859e93e63c7aba50492e992ba9629461da97" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.759656 4656 scope.go:117] "RemoveContainer" containerID="f0551be6db9554b1b88c01d840464311627da50873cfe249dfc47e8c1d6604bf" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.813761 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqf4c\" (UniqueName: \"kubernetes.io/projected/986d7566-ac0b-4651-b2ea-3a839466d5be-kube-api-access-lqf4c\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.813819 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25kqc\" (UniqueName: \"kubernetes.io/projected/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-kube-api-access-25kqc\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.813867 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-proxy-ca-bundles\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.813919 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-client-ca\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.814941 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-client-ca\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.813946 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-config\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.815044 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/986d7566-ac0b-4651-b2ea-3a839466d5be-serving-cert\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.815061 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-client-ca\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.815126 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-proxy-ca-bundles\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.815083 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-config\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.815580 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-serving-cert\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.816203 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-config\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.818057 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-config\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.818408 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-client-ca\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.819507 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/986d7566-ac0b-4651-b2ea-3a839466d5be-serving-cert\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " 
pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.825832 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-serving-cert\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.833488 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25kqc\" (UniqueName: \"kubernetes.io/projected/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-kube-api-access-25kqc\") pod \"controller-manager-6bf9bfbb56-667b4\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:02 crc kubenswrapper[4656]: I0128 15:24:02.841987 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqf4c\" (UniqueName: \"kubernetes.io/projected/986d7566-ac0b-4651-b2ea-3a839466d5be-kube-api-access-lqf4c\") pod \"route-controller-manager-6c967b544-dfmnj\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.003418 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.010998 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.049813 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-667b4"] Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.055225 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj"] Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.185711 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c5c29d-eebc-40b2-8a6d-a7a592efd69d" path="/var/lib/kubelet/pods/42c5c29d-eebc-40b2-8a6d-a7a592efd69d/volumes" Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.186847 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97f85e75-6682-490f-9f1d-cdf924a67f38" path="/var/lib/kubelet/pods/97f85e75-6682-490f-9f1d-cdf924a67f38/volumes" Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.188450 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9d5ce28-bfd3-4a89-9339-e2df3378e9d7" path="/var/lib/kubelet/pods/a9d5ce28-bfd3-4a89-9339-e2df3378e9d7/volumes" Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.189607 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6812603-edd0-45f4-b2b3-6d9ece7e98c2" path="/var/lib/kubelet/pods/d6812603-edd0-45f4-b2b3-6d9ece7e98c2/volumes" Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.341376 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-667b4"] Jan 28 15:24:03 crc kubenswrapper[4656]: W0128 15:24:03.352745 4656 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d2dc2b5_f3f8_4be6_87ec_08e25875d581.slice/crio-97393666ef37592c4bfc65dc644d07d0966a66451e62aa8f8cf1cef3bd49cf89 WatchSource:0}: Error finding container 97393666ef37592c4bfc65dc644d07d0966a66451e62aa8f8cf1cef3bd49cf89: Status 404 returned error can't find the container with id 97393666ef37592c4bfc65dc644d07d0966a66451e62aa8f8cf1cef3bd49cf89 Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.539458 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" event={"ID":"7d2dc2b5-f3f8-4be6-87ec-08e25875d581","Type":"ContainerStarted","Data":"97393666ef37592c4bfc65dc644d07d0966a66451e62aa8f8cf1cef3bd49cf89"} Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.600394 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj"] Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.979926 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cxc6z"] Jan 28 15:24:03 crc kubenswrapper[4656]: I0128 15:24:03.980669 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cxc6z" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerName="registry-server" containerID="cri-o://98e5a5ee9e3fbdbc98cc8d8842a89ca56d4d33d594c2125a555bfd41937a3fbb" gracePeriod=2 Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.182692 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqqvv"] Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.183072 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zqqvv" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="registry-server" containerID="cri-o://359fdaf31fe6b20b24349fbcd0fa88495f66725f41848eea6762b2e197b6023e" gracePeriod=2 Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.601521 4656 generic.go:334] "Generic (PLEG): container finished" podID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerID="359fdaf31fe6b20b24349fbcd0fa88495f66725f41848eea6762b2e197b6023e" exitCode=0 Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.601860 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqvv" event={"ID":"d4371d7c-f72d-4765-9101-34946d11d0e7","Type":"ContainerDied","Data":"359fdaf31fe6b20b24349fbcd0fa88495f66725f41848eea6762b2e197b6023e"} Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.633139 4656 generic.go:334] "Generic (PLEG): container finished" podID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerID="98e5a5ee9e3fbdbc98cc8d8842a89ca56d4d33d594c2125a555bfd41937a3fbb" exitCode=0 Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.633245 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxc6z" event={"ID":"e4ed5142-92c2-4f59-a383-f91999ce3dff","Type":"ContainerDied","Data":"98e5a5ee9e3fbdbc98cc8d8842a89ca56d4d33d594c2125a555bfd41937a3fbb"} Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.634959 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" event={"ID":"7d2dc2b5-f3f8-4be6-87ec-08e25875d581","Type":"ContainerStarted","Data":"f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7"} Jan 28 15:24:04 crc 
kubenswrapper[4656]: I0128 15:24:04.635173 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" podUID="7d2dc2b5-f3f8-4be6-87ec-08e25875d581" containerName="controller-manager" containerID="cri-o://f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7" gracePeriod=30 Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.635696 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.670632 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.670936 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" event={"ID":"986d7566-ac0b-4651-b2ea-3a839466d5be","Type":"ContainerStarted","Data":"07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53"} Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.670966 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" event={"ID":"986d7566-ac0b-4651-b2ea-3a839466d5be","Type":"ContainerStarted","Data":"e1f2c684d015240031f9a01efcbd97b6607151f5966c4bcd7cb98bb3dac1e473"} Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.671282 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" podUID="986d7566-ac0b-4651-b2ea-3a839466d5be" containerName="route-controller-manager" containerID="cri-o://07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53" gracePeriod=30 Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.671851 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.680748 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.785513 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" podStartSLOduration=3.785477943 podStartE2EDuration="3.785477943s" podCreationTimestamp="2026-01-28 15:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:04.672215647 +0000 UTC m=+335.180386471" watchObservedRunningTime="2026-01-28 15:24:04.785477943 +0000 UTC m=+335.293648747" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.819697 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" podStartSLOduration=3.81967182 podStartE2EDuration="3.81967182s" podCreationTimestamp="2026-01-28 15:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:04.817502169 +0000 UTC m=+335.325672983" watchObservedRunningTime="2026-01-28 15:24:04.81967182 +0000 UTC m=+335.327842624" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.872975 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm8p5\" (UniqueName: \"kubernetes.io/projected/e4ed5142-92c2-4f59-a383-f91999ce3dff-kube-api-access-lm8p5\") pod \"e4ed5142-92c2-4f59-a383-f91999ce3dff\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.873076 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-catalog-content\") pod \"e4ed5142-92c2-4f59-a383-f91999ce3dff\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.873127 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-utilities\") pod \"e4ed5142-92c2-4f59-a383-f91999ce3dff\" (UID: \"e4ed5142-92c2-4f59-a383-f91999ce3dff\") " Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.876456 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-utilities" (OuterVolumeSpecName: "utilities") pod "e4ed5142-92c2-4f59-a383-f91999ce3dff" (UID: "e4ed5142-92c2-4f59-a383-f91999ce3dff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.878844 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.890653 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ed5142-92c2-4f59-a383-f91999ce3dff-kube-api-access-lm8p5" (OuterVolumeSpecName: "kube-api-access-lm8p5") pod "e4ed5142-92c2-4f59-a383-f91999ce3dff" (UID: "e4ed5142-92c2-4f59-a383-f91999ce3dff"). InnerVolumeSpecName "kube-api-access-lm8p5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.901907 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4ed5142-92c2-4f59-a383-f91999ce3dff" (UID: "e4ed5142-92c2-4f59-a383-f91999ce3dff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.973960 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-utilities\") pod \"d4371d7c-f72d-4765-9101-34946d11d0e7\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.974389 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-catalog-content\") pod \"d4371d7c-f72d-4765-9101-34946d11d0e7\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.974505 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4wrb\" (UniqueName: \"kubernetes.io/projected/d4371d7c-f72d-4765-9101-34946d11d0e7-kube-api-access-z4wrb\") pod \"d4371d7c-f72d-4765-9101-34946d11d0e7\" (UID: \"d4371d7c-f72d-4765-9101-34946d11d0e7\") " Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.974805 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm8p5\" (UniqueName: \"kubernetes.io/projected/e4ed5142-92c2-4f59-a383-f91999ce3dff-kube-api-access-lm8p5\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.974884 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.974980 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4ed5142-92c2-4f59-a383-f91999ce3dff-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.978757 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-utilities" (OuterVolumeSpecName: "utilities") pod "d4371d7c-f72d-4765-9101-34946d11d0e7" (UID: "d4371d7c-f72d-4765-9101-34946d11d0e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:04 crc kubenswrapper[4656]: I0128 15:24:04.979663 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4371d7c-f72d-4765-9101-34946d11d0e7-kube-api-access-z4wrb" (OuterVolumeSpecName: "kube-api-access-z4wrb") pod "d4371d7c-f72d-4765-9101-34946d11d0e7" (UID: "d4371d7c-f72d-4765-9101-34946d11d0e7"). InnerVolumeSpecName "kube-api-access-z4wrb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.075843 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4wrb\" (UniqueName: \"kubernetes.io/projected/d4371d7c-f72d-4765-9101-34946d11d0e7-kube-api-access-z4wrb\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.075925 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.125855 4656 patch_prober.go:28] interesting pod/route-controller-manager-6c967b544-dfmnj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:42858->10.217.0.58:8443: read: connection reset by peer" start-of-body= Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.126216 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" podUID="986d7566-ac0b-4651-b2ea-3a839466d5be" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:42858->10.217.0.58:8443: read: connection reset by peer" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.126696 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.144852 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4371d7c-f72d-4765-9101-34946d11d0e7" (UID: "d4371d7c-f72d-4765-9101-34946d11d0e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.176875 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-config\") pod \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.176967 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-serving-cert\") pod \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.177006 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-client-ca\") pod \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.177021 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-proxy-ca-bundles\") pod \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.177066 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25kqc\" (UniqueName: \"kubernetes.io/projected/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-kube-api-access-25kqc\") pod \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\" (UID: \"7d2dc2b5-f3f8-4be6-87ec-08e25875d581\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.177390 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4371d7c-f72d-4765-9101-34946d11d0e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.181379 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7d2dc2b5-f3f8-4be6-87ec-08e25875d581" (UID: "7d2dc2b5-f3f8-4be6-87ec-08e25875d581"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.181436 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-config" (OuterVolumeSpecName: "config") pod "7d2dc2b5-f3f8-4be6-87ec-08e25875d581" (UID: "7d2dc2b5-f3f8-4be6-87ec-08e25875d581"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.181786 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-client-ca" (OuterVolumeSpecName: "client-ca") pod "7d2dc2b5-f3f8-4be6-87ec-08e25875d581" (UID: "7d2dc2b5-f3f8-4be6-87ec-08e25875d581"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.185104 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7d2dc2b5-f3f8-4be6-87ec-08e25875d581" (UID: "7d2dc2b5-f3f8-4be6-87ec-08e25875d581"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.185298 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-kube-api-access-25kqc" (OuterVolumeSpecName: "kube-api-access-25kqc") pod "7d2dc2b5-f3f8-4be6-87ec-08e25875d581" (UID: "7d2dc2b5-f3f8-4be6-87ec-08e25875d581"). InnerVolumeSpecName "kube-api-access-25kqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.279249 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.279494 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.279576 4656 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.279780 4656 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.279883 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25kqc\" (UniqueName: \"kubernetes.io/projected/7d2dc2b5-f3f8-4be6-87ec-08e25875d581-kube-api-access-25kqc\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.411724 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6c967b544-dfmnj_986d7566-ac0b-4651-b2ea-3a839466d5be/route-controller-manager/0.log" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.411849 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.481545 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-client-ca\") pod \"986d7566-ac0b-4651-b2ea-3a839466d5be\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.481840 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-config\") pod \"986d7566-ac0b-4651-b2ea-3a839466d5be\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.481949 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/986d7566-ac0b-4651-b2ea-3a839466d5be-serving-cert\") pod \"986d7566-ac0b-4651-b2ea-3a839466d5be\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.482124 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqf4c\" (UniqueName: \"kubernetes.io/projected/986d7566-ac0b-4651-b2ea-3a839466d5be-kube-api-access-lqf4c\") pod \"986d7566-ac0b-4651-b2ea-3a839466d5be\" (UID: \"986d7566-ac0b-4651-b2ea-3a839466d5be\") " Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.482668 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-client-ca" (OuterVolumeSpecName: "client-ca") pod "986d7566-ac0b-4651-b2ea-3a839466d5be" (UID: "986d7566-ac0b-4651-b2ea-3a839466d5be"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.483247 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-config" (OuterVolumeSpecName: "config") pod "986d7566-ac0b-4651-b2ea-3a839466d5be" (UID: "986d7566-ac0b-4651-b2ea-3a839466d5be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.490607 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986d7566-ac0b-4651-b2ea-3a839466d5be-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "986d7566-ac0b-4651-b2ea-3a839466d5be" (UID: "986d7566-ac0b-4651-b2ea-3a839466d5be"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.491334 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/986d7566-ac0b-4651-b2ea-3a839466d5be-kube-api-access-lqf4c" (OuterVolumeSpecName: "kube-api-access-lqf4c") pod "986d7566-ac0b-4651-b2ea-3a839466d5be" (UID: "986d7566-ac0b-4651-b2ea-3a839466d5be"). InnerVolumeSpecName "kube-api-access-lqf4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.583205 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqf4c\" (UniqueName: \"kubernetes.io/projected/986d7566-ac0b-4651-b2ea-3a839466d5be-kube-api-access-lqf4c\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.583237 4656 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.583252 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986d7566-ac0b-4651-b2ea-3a839466d5be-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.583260 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/986d7566-ac0b-4651-b2ea-3a839466d5be-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.678560 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqvv" event={"ID":"d4371d7c-f72d-4765-9101-34946d11d0e7","Type":"ContainerDied","Data":"212e83a1856ef8c98ccafed8d21edbd6de5e1d834be4a94f2e2855ab3ac7c760"} Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.678679 4656 scope.go:117] "RemoveContainer" containerID="359fdaf31fe6b20b24349fbcd0fa88495f66725f41848eea6762b2e197b6023e" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.678906 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqqvv" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.683526 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cxc6z" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.683515 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxc6z" event={"ID":"e4ed5142-92c2-4f59-a383-f91999ce3dff","Type":"ContainerDied","Data":"675ef5c857ca65a9f9f28b7fc7e7d9027ae949f4a6695cc47226b6c49738e9b9"} Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.686427 4656 generic.go:334] "Generic (PLEG): container finished" podID="7d2dc2b5-f3f8-4be6-87ec-08e25875d581" containerID="f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7" exitCode=0 Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.686646 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.686602 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" event={"ID":"7d2dc2b5-f3f8-4be6-87ec-08e25875d581","Type":"ContainerDied","Data":"f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7"} Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.686720 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bf9bfbb56-667b4" event={"ID":"7d2dc2b5-f3f8-4be6-87ec-08e25875d581","Type":"ContainerDied","Data":"97393666ef37592c4bfc65dc644d07d0966a66451e62aa8f8cf1cef3bd49cf89"} Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.692325 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6c967b544-dfmnj_986d7566-ac0b-4651-b2ea-3a839466d5be/route-controller-manager/0.log" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.692389 4656 generic.go:334] "Generic (PLEG): container finished" podID="986d7566-ac0b-4651-b2ea-3a839466d5be" containerID="07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53" exitCode=255 Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.692431 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" event={"ID":"986d7566-ac0b-4651-b2ea-3a839466d5be","Type":"ContainerDied","Data":"07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53"} Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.692458 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" event={"ID":"986d7566-ac0b-4651-b2ea-3a839466d5be","Type":"ContainerDied","Data":"e1f2c684d015240031f9a01efcbd97b6607151f5966c4bcd7cb98bb3dac1e473"} Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.692437 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.701979 4656 scope.go:117] "RemoveContainer" containerID="5b4233be7c7686331f99bc85bff4b844e11a65ed773671095a89af9d01cb614c" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.707700 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqqvv"] Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.712690 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zqqvv"] Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.719022 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cxc6z"] Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.721150 4656 scope.go:117] "RemoveContainer" containerID="1c47ef6d923405e3cbd5ebf98dc7b072df4b7f29ebc5c89b0c18a12b52617312" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.724213 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cxc6z"] Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.732660 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-667b4"] Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.738307 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6bf9bfbb56-667b4"] Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.746068 4656 scope.go:117] "RemoveContainer" containerID="98e5a5ee9e3fbdbc98cc8d8842a89ca56d4d33d594c2125a555bfd41937a3fbb" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.747728 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj"] Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.754490 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c967b544-dfmnj"] Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.759219 4656 scope.go:117] "RemoveContainer" containerID="10ebf6db7ee2964bf2e7f1a9dc6ff72c579ec55214bcb19505e1b006f8bbcdba" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.775830 4656 scope.go:117] "RemoveContainer" containerID="7cabac754976a0ee49a3872bb0095429cd22f9300449f019606cabdd291f04c7" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.790005 4656 scope.go:117] "RemoveContainer" containerID="f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.803434 4656 scope.go:117] "RemoveContainer" containerID="f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7" Jan 28 15:24:05 crc kubenswrapper[4656]: E0128 15:24:05.804152 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7\": container with ID starting with f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7 not found: ID does not exist" containerID="f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.804220 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7"} err="failed to get container 
status \"f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7\": rpc error: code = NotFound desc = could not find container \"f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7\": container with ID starting with f080c4465122791c8e433782c814044b76f1491179bef7ad37dff372e5340fc7 not found: ID does not exist" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.804253 4656 scope.go:117] "RemoveContainer" containerID="07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.817000 4656 scope.go:117] "RemoveContainer" containerID="07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53" Jan 28 15:24:05 crc kubenswrapper[4656]: E0128 15:24:05.817851 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53\": container with ID starting with 07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53 not found: ID does not exist" containerID="07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53" Jan 28 15:24:05 crc kubenswrapper[4656]: I0128 15:24:05.817892 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53"} err="failed to get container status \"07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53\": rpc error: code = NotFound desc = could not find container \"07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53\": container with ID starting with 07b9d2c8ed4a3522fa807e9a1c24ac1d983e0d4b17c78e8e7e6a26c706585a53 not found: ID does not exist" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.177608 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d2dc2b5-f3f8-4be6-87ec-08e25875d581" path="/var/lib/kubelet/pods/7d2dc2b5-f3f8-4be6-87ec-08e25875d581/volumes" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.178330 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="986d7566-ac0b-4651-b2ea-3a839466d5be" path="/var/lib/kubelet/pods/986d7566-ac0b-4651-b2ea-3a839466d5be/volumes" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.178777 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" path="/var/lib/kubelet/pods/d4371d7c-f72d-4765-9101-34946d11d0e7/volumes" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.179328 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" path="/var/lib/kubelet/pods/e4ed5142-92c2-4f59-a383-f91999ce3dff/volumes" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.614658 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"] Jan 28 15:24:07 crc kubenswrapper[4656]: E0128 15:24:07.614987 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="registry-server" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615010 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="registry-server" Jan 28 15:24:07 crc kubenswrapper[4656]: E0128 15:24:07.615026 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerName="extract-utilities" Jan 28 
15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615032 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerName="extract-utilities" Jan 28 15:24:07 crc kubenswrapper[4656]: E0128 15:24:07.615039 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="extract-content" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615045 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="extract-content" Jan 28 15:24:07 crc kubenswrapper[4656]: E0128 15:24:07.615064 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerName="registry-server" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615073 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerName="registry-server" Jan 28 15:24:07 crc kubenswrapper[4656]: E0128 15:24:07.615083 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986d7566-ac0b-4651-b2ea-3a839466d5be" containerName="route-controller-manager" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615089 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="986d7566-ac0b-4651-b2ea-3a839466d5be" containerName="route-controller-manager" Jan 28 15:24:07 crc kubenswrapper[4656]: E0128 15:24:07.615102 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="extract-utilities" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615107 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="extract-utilities" Jan 28 15:24:07 crc kubenswrapper[4656]: E0128 15:24:07.615114 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerName="extract-content" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615119 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerName="extract-content" Jan 28 15:24:07 crc kubenswrapper[4656]: E0128 15:24:07.615128 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2dc2b5-f3f8-4be6-87ec-08e25875d581" containerName="controller-manager" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615134 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2dc2b5-f3f8-4be6-87ec-08e25875d581" containerName="controller-manager" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615268 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d2dc2b5-f3f8-4be6-87ec-08e25875d581" containerName="controller-manager" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615287 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="986d7566-ac0b-4651-b2ea-3a839466d5be" containerName="route-controller-manager" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615306 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4371d7c-f72d-4765-9101-34946d11d0e7" containerName="registry-server" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615317 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ed5142-92c2-4f59-a383-f91999ce3dff" containerName="registry-server" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.615833 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.618877 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"] Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.619259 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.619731 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.619858 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.620118 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.620448 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.621454 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.621891 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.622721 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.623034 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.623307 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.623406 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.623729 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.631092 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"] Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.632044 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.636049 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.639535 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"] Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.743814 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-client-ca\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.743894 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93ed009-2548-461a-b6c8-e8cf2349fe79-config\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.743949 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-proxy-ca-bundles\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.743980 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f93ed009-2548-461a-b6c8-e8cf2349fe79-client-ca\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.744005 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f93ed009-2548-461a-b6c8-e8cf2349fe79-serving-cert\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.744027 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed0b9628-0338-430a-83bc-0d22461d043c-serving-cert\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.744042 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzsm9\" (UniqueName: \"kubernetes.io/projected/ed0b9628-0338-430a-83bc-0d22461d043c-kube-api-access-lzsm9\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.744071 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxmjm\" (UniqueName: \"kubernetes.io/projected/f93ed009-2548-461a-b6c8-e8cf2349fe79-kube-api-access-xxmjm\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.744090 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-config\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.845690 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-proxy-ca-bundles\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.845767 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f93ed009-2548-461a-b6c8-e8cf2349fe79-client-ca\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.845800 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f93ed009-2548-461a-b6c8-e8cf2349fe79-serving-cert\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.845820 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed0b9628-0338-430a-83bc-0d22461d043c-serving-cert\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.845848 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzsm9\" (UniqueName: \"kubernetes.io/projected/ed0b9628-0338-430a-83bc-0d22461d043c-kube-api-access-lzsm9\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.845890 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxmjm\" (UniqueName: \"kubernetes.io/projected/f93ed009-2548-461a-b6c8-e8cf2349fe79-kube-api-access-xxmjm\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.845922 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-config\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.845969 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-client-ca\") pod \"controller-manager-59c7bcfdd9-77tzj\" 
(UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.846001 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93ed009-2548-461a-b6c8-e8cf2349fe79-config\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.847205 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-client-ca\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.847288 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-proxy-ca-bundles\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.847639 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93ed009-2548-461a-b6c8-e8cf2349fe79-config\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.847642 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f93ed009-2548-461a-b6c8-e8cf2349fe79-client-ca\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.847884 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-config\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.851844 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f93ed009-2548-461a-b6c8-e8cf2349fe79-serving-cert\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.852877 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed0b9628-0338-430a-83bc-0d22461d043c-serving-cert\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.866862 4656 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xxmjm\" (UniqueName: \"kubernetes.io/projected/f93ed009-2548-461a-b6c8-e8cf2349fe79-kube-api-access-xxmjm\") pod \"route-controller-manager-6f648798f8-bff7r\" (UID: \"f93ed009-2548-461a-b6c8-e8cf2349fe79\") " pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.875232 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzsm9\" (UniqueName: \"kubernetes.io/projected/ed0b9628-0338-430a-83bc-0d22461d043c-kube-api-access-lzsm9\") pod \"controller-manager-59c7bcfdd9-77tzj\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.939716 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:07 crc kubenswrapper[4656]: I0128 15:24:07.947310 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.261618 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r"] Jan 28 15:24:08 crc kubenswrapper[4656]: W0128 15:24:08.291554 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf93ed009_2548_461a_b6c8_e8cf2349fe79.slice/crio-b81821f8d2073506400e20e4e2cb986218874ad389e86710a758010d96e477a3 WatchSource:0}: Error finding container b81821f8d2073506400e20e4e2cb986218874ad389e86710a758010d96e477a3: Status 404 returned error can't find the container with id b81821f8d2073506400e20e4e2cb986218874ad389e86710a758010d96e477a3 Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.483921 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"] Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.758810 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" event={"ID":"f93ed009-2548-461a-b6c8-e8cf2349fe79","Type":"ContainerStarted","Data":"f60bdb5890ca2ef0bdced1158621650808c2caf96eeaec77201ab6f46133d2e4"} Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.758866 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" event={"ID":"f93ed009-2548-461a-b6c8-e8cf2349fe79","Type":"ContainerStarted","Data":"b81821f8d2073506400e20e4e2cb986218874ad389e86710a758010d96e477a3"} Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.759222 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.761118 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" event={"ID":"ed0b9628-0338-430a-83bc-0d22461d043c","Type":"ContainerStarted","Data":"1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2"} Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.761145 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" event={"ID":"ed0b9628-0338-430a-83bc-0d22461d043c","Type":"ContainerStarted","Data":"4bc6589f9eb7f25b6e2f6f6b82c40c2f6d98d2ea31224a268facb96b14b7970f"} Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.761802 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.766578 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:08 crc kubenswrapper[4656]: I0128 15:24:08.855839 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" podStartSLOduration=5.855810355 podStartE2EDuration="5.855810355s" podCreationTimestamp="2026-01-28 15:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:08.821642118 +0000 UTC m=+339.329812922" watchObservedRunningTime="2026-01-28 15:24:08.855810355 +0000 UTC m=+339.363981159" Jan 28 15:24:09 crc kubenswrapper[4656]: I0128 15:24:09.310065 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f648798f8-bff7r" Jan 28 15:24:09 crc kubenswrapper[4656]: I0128 15:24:09.332605 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" podStartSLOduration=6.332579848 podStartE2EDuration="6.332579848s" podCreationTimestamp="2026-01-28 15:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:08.873945528 +0000 UTC m=+339.382116342" watchObservedRunningTime="2026-01-28 15:24:09.332579848 +0000 UTC m=+339.840750672" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.184418 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"] Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.185281 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" podUID="ed0b9628-0338-430a-83bc-0d22461d043c" containerName="controller-manager" containerID="cri-o://1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2" gracePeriod=30 Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.663098 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.821267 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed0b9628-0338-430a-83bc-0d22461d043c-serving-cert\") pod \"ed0b9628-0338-430a-83bc-0d22461d043c\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.821332 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-config\") pod \"ed0b9628-0338-430a-83bc-0d22461d043c\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.822337 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-config" (OuterVolumeSpecName: "config") pod "ed0b9628-0338-430a-83bc-0d22461d043c" (UID: "ed0b9628-0338-430a-83bc-0d22461d043c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.822407 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzsm9\" (UniqueName: \"kubernetes.io/projected/ed0b9628-0338-430a-83bc-0d22461d043c-kube-api-access-lzsm9\") pod \"ed0b9628-0338-430a-83bc-0d22461d043c\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.822444 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-client-ca\") pod \"ed0b9628-0338-430a-83bc-0d22461d043c\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.822768 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-proxy-ca-bundles\") pod \"ed0b9628-0338-430a-83bc-0d22461d043c\" (UID: \"ed0b9628-0338-430a-83bc-0d22461d043c\") " Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.823184 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.823584 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-client-ca" (OuterVolumeSpecName: "client-ca") pod "ed0b9628-0338-430a-83bc-0d22461d043c" (UID: "ed0b9628-0338-430a-83bc-0d22461d043c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.823799 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ed0b9628-0338-430a-83bc-0d22461d043c" (UID: "ed0b9628-0338-430a-83bc-0d22461d043c"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.827311 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed0b9628-0338-430a-83bc-0d22461d043c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ed0b9628-0338-430a-83bc-0d22461d043c" (UID: "ed0b9628-0338-430a-83bc-0d22461d043c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.828505 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed0b9628-0338-430a-83bc-0d22461d043c-kube-api-access-lzsm9" (OuterVolumeSpecName: "kube-api-access-lzsm9") pod "ed0b9628-0338-430a-83bc-0d22461d043c" (UID: "ed0b9628-0338-430a-83bc-0d22461d043c"). InnerVolumeSpecName "kube-api-access-lzsm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.833771 4656 generic.go:334] "Generic (PLEG): container finished" podID="ed0b9628-0338-430a-83bc-0d22461d043c" containerID="1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2" exitCode=0 Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.833821 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" event={"ID":"ed0b9628-0338-430a-83bc-0d22461d043c","Type":"ContainerDied","Data":"1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2"} Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.833877 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" event={"ID":"ed0b9628-0338-430a-83bc-0d22461d043c","Type":"ContainerDied","Data":"4bc6589f9eb7f25b6e2f6f6b82c40c2f6d98d2ea31224a268facb96b14b7970f"} Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.833897 4656 scope.go:117] "RemoveContainer" containerID="1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.834071 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.872659 4656 scope.go:117] "RemoveContainer" containerID="1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2" Jan 28 15:24:21 crc kubenswrapper[4656]: E0128 15:24:21.873874 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2\": container with ID starting with 1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2 not found: ID does not exist" containerID="1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.873915 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2"} err="failed to get container status \"1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2\": rpc error: code = NotFound desc = could not find container \"1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2\": container with ID starting with 1ea1a3eac6ef11c2f5fa868015fd48ffef131b908cc28c3baf5f6684779dcda2 not found: ID does not exist" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.894757 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"] Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.921441 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-59c7bcfdd9-77tzj"] Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.928718 4656 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed0b9628-0338-430a-83bc-0d22461d043c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.928761 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzsm9\" (UniqueName: \"kubernetes.io/projected/ed0b9628-0338-430a-83bc-0d22461d043c-kube-api-access-lzsm9\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.928774 4656 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:21 crc kubenswrapper[4656]: I0128 15:24:21.928783 4656 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed0b9628-0338-430a-83bc-0d22461d043c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.630569 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv"] Jan 28 15:24:22 crc kubenswrapper[4656]: E0128 15:24:22.631203 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0b9628-0338-430a-83bc-0d22461d043c" containerName="controller-manager" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.631245 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0b9628-0338-430a-83bc-0d22461d043c" containerName="controller-manager" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.631378 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0b9628-0338-430a-83bc-0d22461d043c" 
containerName="controller-manager" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.631868 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.636038 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.636218 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.636562 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.636759 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.636655 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.638267 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-proxy-ca-bundles\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.638440 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e43dd75-be02-4340-baf4-028a9587f06f-serving-cert\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.638582 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-client-ca\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.638750 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzrgm\" (UniqueName: \"kubernetes.io/projected/7e43dd75-be02-4340-baf4-028a9587f06f-kube-api-access-mzrgm\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.638912 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-config\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.638646 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 
15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.645254 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.647478 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv"] Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.739779 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-proxy-ca-bundles\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.740073 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e43dd75-be02-4340-baf4-028a9587f06f-serving-cert\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.740252 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-client-ca\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.740390 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzrgm\" (UniqueName: \"kubernetes.io/projected/7e43dd75-be02-4340-baf4-028a9587f06f-kube-api-access-mzrgm\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.740578 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-config\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.741038 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-proxy-ca-bundles\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.741674 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-client-ca\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.742227 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e43dd75-be02-4340-baf4-028a9587f06f-config\") pod 
\"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.761318 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e43dd75-be02-4340-baf4-028a9587f06f-serving-cert\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.762381 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzrgm\" (UniqueName: \"kubernetes.io/projected/7e43dd75-be02-4340-baf4-028a9587f06f-kube-api-access-mzrgm\") pod \"controller-manager-7bd68bb8d6-85gqv\" (UID: \"7e43dd75-be02-4340-baf4-028a9587f06f\") " pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:22 crc kubenswrapper[4656]: I0128 15:24:22.949495 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.181021 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed0b9628-0338-430a-83bc-0d22461d043c" path="/var/lib/kubelet/pods/ed0b9628-0338-430a-83bc-0d22461d043c/volumes" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.366685 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv"] Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.404191 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pml8w"] Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.405063 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.432976 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pml8w"] Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.549266 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-registry-certificates\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.549433 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-trusted-ca\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.549481 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-bound-sa-token\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.549564 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.549622 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.549671 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.549717 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-registry-tls\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.549801 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjxjf\" (UniqueName: 
\"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-kube-api-access-sjxjf\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.570441 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.650784 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-registry-certificates\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.651621 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-trusted-ca\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.651725 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-bound-sa-token\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.651775 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.651862 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.651897 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-registry-tls\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.651933 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjxjf\" (UniqueName: \"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-kube-api-access-sjxjf\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.652547 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.652974 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-trusted-ca\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.653349 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-registry-certificates\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.656667 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.656839 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-registry-tls\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.677870 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-bound-sa-token\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.686035 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjxjf\" (UniqueName: \"kubernetes.io/projected/9c8e92c4-99e6-4a52-b35c-6d2cf9abba12-kube-api-access-sjxjf\") pod \"image-registry-66df7c8f76-pml8w\" (UID: \"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12\") " pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.721369 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.860267 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" event={"ID":"7e43dd75-be02-4340-baf4-028a9587f06f","Type":"ContainerStarted","Data":"0cf4ac66d4b5a3a96e428436b7b84c537349cc6f183dd94033321f0a334c237a"} Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.860586 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" event={"ID":"7e43dd75-be02-4340-baf4-028a9587f06f","Type":"ContainerStarted","Data":"ec569fbc46cfdc5054fcc46ef0a44a9bbdf1e8242676628bd1b4ee1ac69f73e9"} Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.862662 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.868818 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" Jan 28 15:24:23 crc kubenswrapper[4656]: I0128 15:24:23.897480 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7bd68bb8d6-85gqv" podStartSLOduration=2.897461188 podStartE2EDuration="2.897461188s" podCreationTimestamp="2026-01-28 15:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:23.895994537 +0000 UTC m=+354.404165341" watchObservedRunningTime="2026-01-28 15:24:23.897461188 +0000 UTC m=+354.405631992" Jan 28 15:24:24 crc kubenswrapper[4656]: I0128 15:24:24.009353 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pml8w"] Jan 28 15:24:24 crc kubenswrapper[4656]: W0128 15:24:24.015883 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c8e92c4_99e6_4a52_b35c_6d2cf9abba12.slice/crio-81e8df40c3f05e32ea69dace326b1b31ae272b669ab024b1c60a4ca80e8b5ea4 WatchSource:0}: Error finding container 81e8df40c3f05e32ea69dace326b1b31ae272b669ab024b1c60a4ca80e8b5ea4: Status 404 returned error can't find the container with id 81e8df40c3f05e32ea69dace326b1b31ae272b669ab024b1c60a4ca80e8b5ea4 Jan 28 15:24:24 crc kubenswrapper[4656]: I0128 15:24:24.870512 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" event={"ID":"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12","Type":"ContainerStarted","Data":"a0a025077952ef29c41d64e6720f6371e9d5e48c59042d94854b562a3237262e"} Jan 28 15:24:24 crc kubenswrapper[4656]: I0128 15:24:24.870989 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" event={"ID":"9c8e92c4-99e6-4a52-b35c-6d2cf9abba12","Type":"ContainerStarted","Data":"81e8df40c3f05e32ea69dace326b1b31ae272b669ab024b1c60a4ca80e8b5ea4"} Jan 28 15:24:24 crc kubenswrapper[4656]: I0128 15:24:24.871027 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:24 crc kubenswrapper[4656]: I0128 15:24:24.895467 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" podStartSLOduration=1.8954350899999999 podStartE2EDuration="1.89543509s" podCreationTimestamp="2026-01-28 15:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:24.89048555 +0000 UTC m=+355.398656354" watchObservedRunningTime="2026-01-28 15:24:24.89543509 +0000 UTC m=+355.403605894" Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.860139 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gzr9v"] Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.860939 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gzr9v" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerName="registry-server" containerID="cri-o://891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855" gracePeriod=30 Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.898277 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w4vpf"] Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.899021 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w4vpf" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerName="registry-server" containerID="cri-o://10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746" gracePeriod=30 Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.922494 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66pz7"] Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.922839 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator" containerID="cri-o://38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4" gracePeriod=30 Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.926482 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhqpx"] Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.927050 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nhqpx" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerName="registry-server" containerID="cri-o://78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41" gracePeriod=30 Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.936197 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8dc6j"] Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.936804 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8dc6j" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="registry-server" containerID="cri-o://f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9" gracePeriod=30 Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.945573 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2xzq6"] Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.946718 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.965738 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2xzq6"] Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.981438 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vscn\" (UniqueName: \"kubernetes.io/projected/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-kube-api-access-4vscn\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.981529 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:39 crc kubenswrapper[4656]: I0128 15:24:39.981609 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.082533 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vscn\" (UniqueName: \"kubernetes.io/projected/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-kube-api-access-4vscn\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.082573 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.082603 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.084958 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.094864 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.098325 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vscn\" (UniqueName: \"kubernetes.io/projected/c48ec7e7-12ff-47f6-9a82-59078f7c2b04-kube-api-access-4vscn\") pod \"marketplace-operator-79b997595-2xzq6\" (UID: \"c48ec7e7-12ff-47f6-9a82-59078f7c2b04\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.260132 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.440889 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.583943 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.590391 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftms7\" (UniqueName: \"kubernetes.io/projected/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-kube-api-access-ftms7\") pod \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.590488 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-catalog-content\") pod \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.590529 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-utilities\") pod \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\" (UID: \"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.591500 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-utilities" (OuterVolumeSpecName: "utilities") pod "f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" (UID: "f0fe12da-fb7d-444b-b8d3-47e5988fb7f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.619493 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-kube-api-access-ftms7" (OuterVolumeSpecName: "kube-api-access-ftms7") pod "f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" (UID: "f0fe12da-fb7d-444b-b8d3-47e5988fb7f9"). InnerVolumeSpecName "kube-api-access-ftms7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.693459 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" (UID: "f0fe12da-fb7d-444b-b8d3-47e5988fb7f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.694014 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p5b8\" (UniqueName: \"kubernetes.io/projected/c7b09f99-0d13-49a0-8b8d-fc77915a171d-kube-api-access-7p5b8\") pod \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.694053 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-trusted-ca\") pod \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.694098 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-operator-metrics\") pod \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\" (UID: \"c7b09f99-0d13-49a0-8b8d-fc77915a171d\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.694390 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftms7\" (UniqueName: \"kubernetes.io/projected/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-kube-api-access-ftms7\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.694407 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.694419 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.696523 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c7b09f99-0d13-49a0-8b8d-fc77915a171d" (UID: "c7b09f99-0d13-49a0-8b8d-fc77915a171d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.704777 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c7b09f99-0d13-49a0-8b8d-fc77915a171d" (UID: "c7b09f99-0d13-49a0-8b8d-fc77915a171d"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.708336 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b09f99-0d13-49a0-8b8d-fc77915a171d-kube-api-access-7p5b8" (OuterVolumeSpecName: "kube-api-access-7p5b8") pod "c7b09f99-0d13-49a0-8b8d-fc77915a171d" (UID: "c7b09f99-0d13-49a0-8b8d-fc77915a171d"). InnerVolumeSpecName "kube-api-access-7p5b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.801842 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p5b8\" (UniqueName: \"kubernetes.io/projected/c7b09f99-0d13-49a0-8b8d-fc77915a171d-kube-api-access-7p5b8\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.801879 4656 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.801889 4656 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c7b09f99-0d13-49a0-8b8d-fc77915a171d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.872089 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.887697 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.905318 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqjd8\" (UniqueName: \"kubernetes.io/projected/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-kube-api-access-rqjd8\") pod \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.905366 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-catalog-content\") pod \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.905404 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-utilities\") pod \"7de9fc74-9948-4e73-ac93-25f9c22189ce\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.905419 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-utilities\") pod \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\" (UID: \"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.905439 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-catalog-content\") pod \"7de9fc74-9948-4e73-ac93-25f9c22189ce\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " Jan 28 
15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.905468 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d5jj\" (UniqueName: \"kubernetes.io/projected/7de9fc74-9948-4e73-ac93-25f9c22189ce-kube-api-access-2d5jj\") pod \"7de9fc74-9948-4e73-ac93-25f9c22189ce\" (UID: \"7de9fc74-9948-4e73-ac93-25f9c22189ce\") " Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.906541 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-utilities" (OuterVolumeSpecName: "utilities") pod "fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" (UID: "fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.906553 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-utilities" (OuterVolumeSpecName: "utilities") pod "7de9fc74-9948-4e73-ac93-25f9c22189ce" (UID: "7de9fc74-9948-4e73-ac93-25f9c22189ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.908859 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7de9fc74-9948-4e73-ac93-25f9c22189ce-kube-api-access-2d5jj" (OuterVolumeSpecName: "kube-api-access-2d5jj") pod "7de9fc74-9948-4e73-ac93-25f9c22189ce" (UID: "7de9fc74-9948-4e73-ac93-25f9c22189ce"). InnerVolumeSpecName "kube-api-access-2d5jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.911307 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-kube-api-access-rqjd8" (OuterVolumeSpecName: "kube-api-access-rqjd8") pod "fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" (UID: "fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1"). InnerVolumeSpecName "kube-api-access-rqjd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.913671 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.949822 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2xzq6"] Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.971078 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7de9fc74-9948-4e73-ac93-25f9c22189ce" (UID: "7de9fc74-9948-4e73-ac93-25f9c22189ce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.977860 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" event={"ID":"c48ec7e7-12ff-47f6-9a82-59078f7c2b04","Type":"ContainerStarted","Data":"f72d4e2fef18c6315e67c1106c8d141421734775a783b48126b08563e54bb13f"} Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.989395 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" (UID: "fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.993411 4656 generic.go:334] "Generic (PLEG): container finished" podID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerID="f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9" exitCode=0 Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.993503 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dc6j" event={"ID":"a6b1aae7-caaa-427d-8b07-705b02e81763","Type":"ContainerDied","Data":"f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9"} Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.993549 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dc6j" event={"ID":"a6b1aae7-caaa-427d-8b07-705b02e81763","Type":"ContainerDied","Data":"4e651d4a13052ffb208c07169669a56bdcc7dc1c17ca4b751d5784fb82cafa0f"} Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.993601 4656 scope.go:117] "RemoveContainer" containerID="f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9" Jan 28 15:24:40 crc kubenswrapper[4656]: I0128 15:24:40.993780 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8dc6j" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.007581 4656 generic.go:334] "Generic (PLEG): container finished" podID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerID="78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41" exitCode=0 Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.007692 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhqpx" event={"ID":"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1","Type":"ContainerDied","Data":"78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41"} Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.007750 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhqpx" event={"ID":"fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1","Type":"ContainerDied","Data":"cf973c02c04ef3c79efcf712d2d128bb8313be52f3235da8828930c75b3c34ff"} Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.007851 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhqpx" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.008473 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d5jj\" (UniqueName: \"kubernetes.io/projected/7de9fc74-9948-4e73-ac93-25f9c22189ce-kube-api-access-2d5jj\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.008847 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqjd8\" (UniqueName: \"kubernetes.io/projected/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-kube-api-access-rqjd8\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.008954 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.009040 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.009143 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.009246 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7de9fc74-9948-4e73-ac93-25f9c22189ce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.012548 4656 generic.go:334] "Generic (PLEG): container finished" podID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerID="10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746" exitCode=0 Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.012700 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w4vpf" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.013657 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4vpf" event={"ID":"7de9fc74-9948-4e73-ac93-25f9c22189ce","Type":"ContainerDied","Data":"10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746"} Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.013750 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4vpf" event={"ID":"7de9fc74-9948-4e73-ac93-25f9c22189ce","Type":"ContainerDied","Data":"cf818feb09c0ccfd9e455faaffde6799b01a2c05ed95767814d1627d35d8c054"} Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.021600 4656 generic.go:334] "Generic (PLEG): container finished" podID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerID="891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855" exitCode=0 Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.021654 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzr9v" event={"ID":"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9","Type":"ContainerDied","Data":"891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855"} Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.021675 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzr9v" event={"ID":"f0fe12da-fb7d-444b-b8d3-47e5988fb7f9","Type":"ContainerDied","Data":"c98421a3a0c42729a0b7a3850570dd8ac89ef8e57c178261115d715189f4f351"} Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.021730 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzr9v" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.027777 4656 generic.go:334] "Generic (PLEG): container finished" podID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerID="38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4" exitCode=0 Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.027810 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" event={"ID":"c7b09f99-0d13-49a0-8b8d-fc77915a171d","Type":"ContainerDied","Data":"38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4"} Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.027832 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" event={"ID":"c7b09f99-0d13-49a0-8b8d-fc77915a171d","Type":"ContainerDied","Data":"93539b6344f80c28c33a800b6b17b3b195013261b9320da802d38ff972cee5ee"} Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.027876 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-66pz7" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.058928 4656 scope.go:117] "RemoveContainer" containerID="f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.100854 4656 scope.go:117] "RemoveContainer" containerID="98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.109712 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-catalog-content\") pod \"a6b1aae7-caaa-427d-8b07-705b02e81763\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.109745 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-utilities\") pod \"a6b1aae7-caaa-427d-8b07-705b02e81763\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.109786 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkjf8\" (UniqueName: \"kubernetes.io/projected/a6b1aae7-caaa-427d-8b07-705b02e81763-kube-api-access-zkjf8\") pod \"a6b1aae7-caaa-427d-8b07-705b02e81763\" (UID: \"a6b1aae7-caaa-427d-8b07-705b02e81763\") " Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.111127 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-utilities" (OuterVolumeSpecName: "utilities") pod "a6b1aae7-caaa-427d-8b07-705b02e81763" (UID: "a6b1aae7-caaa-427d-8b07-705b02e81763"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.121445 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b1aae7-caaa-427d-8b07-705b02e81763-kube-api-access-zkjf8" (OuterVolumeSpecName: "kube-api-access-zkjf8") pod "a6b1aae7-caaa-427d-8b07-705b02e81763" (UID: "a6b1aae7-caaa-427d-8b07-705b02e81763"). InnerVolumeSpecName "kube-api-access-zkjf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.123999 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gzr9v"] Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.127763 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gzr9v"] Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.137094 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66pz7"] Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.150407 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-66pz7"] Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.154248 4656 scope.go:117] "RemoveContainer" containerID="f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.158399 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9\": container with ID starting with f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9 not found: ID does not exist" containerID="f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.158459 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9"} err="failed to get container status \"f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9\": rpc error: code = NotFound desc = could not find container \"f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9\": container with ID starting with f9ac54d0bf96e7bd5a12d2bac88b99719445a748d8dee709ba19444efa4e57c9 not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.158501 4656 scope.go:117] "RemoveContainer" containerID="f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.163932 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f\": container with ID starting with f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f not found: ID does not exist" containerID="f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.163990 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f"} err="failed to get container status \"f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f\": rpc error: code = NotFound desc = could not find container \"f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f\": container with ID starting with f447df190ddc1566d5dd719362328335f514cde8740650ce57943ca54f1a667f not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.164023 4656 scope.go:117] "RemoveContainer" containerID="98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.167909 4656 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388\": container with ID starting with 98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388 not found: ID does not exist" containerID="98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.167959 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388"} err="failed to get container status \"98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388\": rpc error: code = NotFound desc = could not find container \"98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388\": container with ID starting with 98c14df4f94c4722b9036644e8967a9ef8690ff3f100d44fc74d73c6dac60388 not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.167987 4656 scope.go:117] "RemoveContainer" containerID="78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.200385 4656 scope.go:117] "RemoveContainer" containerID="3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.203405 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" path="/var/lib/kubelet/pods/c7b09f99-0d13-49a0-8b8d-fc77915a171d/volumes" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.204218 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" path="/var/lib/kubelet/pods/f0fe12da-fb7d-444b-b8d3-47e5988fb7f9/volumes" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.210367 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhqpx"] Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.210596 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhqpx"] Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.211748 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.211881 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkjf8\" (UniqueName: \"kubernetes.io/projected/a6b1aae7-caaa-427d-8b07-705b02e81763-kube-api-access-zkjf8\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.216372 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w4vpf"] Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.234304 4656 scope.go:117] "RemoveContainer" containerID="daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.245363 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w4vpf"] Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.257816 4656 scope.go:117] "RemoveContainer" containerID="78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.259821 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41\": container with ID starting with 78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41 not found: ID does not exist" containerID="78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.259868 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41"} err="failed to get container status \"78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41\": rpc error: code = NotFound desc = could not find container \"78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41\": container with ID starting with 78ffd432288a4cfc6a1e82d4405f0014e97eab284f65786fd4acf39a7ef80e41 not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.259901 4656 scope.go:117] "RemoveContainer" containerID="3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.260940 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b\": container with ID starting with 3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b not found: ID does not exist" containerID="3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.260992 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b"} err="failed to get container status \"3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b\": rpc error: code = NotFound desc = could not find container \"3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b\": container with ID starting with 3c3cdfb9127a525e6095faec4abc4ff5810d14580800d693687130a0c87f162b not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.261030 4656 scope.go:117] "RemoveContainer" containerID="daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050" Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.261854 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050\": container with ID starting with daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050 not found: ID does not exist" containerID="daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.261890 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050"} err="failed to get container status \"daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050\": rpc error: code = NotFound desc = could not find container \"daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050\": container with ID starting with daf304bd5f9057681d8a66d23303bddcfd2bb12b9f98644bbd4ebda96fd67050 not found: ID does not exist" Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.261933 4656 scope.go:117] "RemoveContainer" containerID="10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746" Jan 28 15:24:41 
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.313203 4656 scope.go:117] "RemoveContainer" containerID="142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.352238 4656 scope.go:117] "RemoveContainer" containerID="10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746"
Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.352585 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746\": container with ID starting with 10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746 not found: ID does not exist" containerID="10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.352616 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746"} err="failed to get container status \"10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746\": rpc error: code = NotFound desc = could not find container \"10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746\": container with ID starting with 10402268bf6a1aef5d811da224e33b1473b4fd3e89e739cca0315b5974f14746 not found: ID does not exist"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.352638 4656 scope.go:117] "RemoveContainer" containerID="a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a"
Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.352837 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a\": container with ID starting with a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a not found: ID does not exist" containerID="a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.352894 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a"} err="failed to get container status \"a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a\": rpc error: code = NotFound desc = could not find container \"a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a\": container with ID starting with a53c148a59ee5a7097991c7cf52ab8089b0e45e5e38bed13c7b737181ca8a11a not found: ID does not exist"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.352918 4656 scope.go:117] "RemoveContainer" containerID="142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8"
Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.353141 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8\": container with ID starting with 142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8 not found: ID does not exist" containerID="142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.353190 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8"} err="failed to get container status \"142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8\": rpc error: code = NotFound desc = could not find container \"142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8\": container with ID starting with 142068f065253ec5b911707d6df8038e2adfbad9b403fd1abaf70d7a2bf846c8 not found: ID does not exist"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.353208 4656 scope.go:117] "RemoveContainer" containerID="891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.372395 4656 scope.go:117] "RemoveContainer" containerID="3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.387470 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6b1aae7-caaa-427d-8b07-705b02e81763" (UID: "a6b1aae7-caaa-427d-8b07-705b02e81763"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.402545 4656 scope.go:117] "RemoveContainer" containerID="6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.415466 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b1aae7-caaa-427d-8b07-705b02e81763-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.421926 4656 scope.go:117] "RemoveContainer" containerID="891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855"
Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.423353 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855\": container with ID starting with 891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855 not found: ID does not exist" containerID="891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.423399 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855"} err="failed to get container status \"891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855\": rpc error: code = NotFound desc = could not find container \"891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855\": container with ID starting with 891b8e7ee832d2931fb7a8730fe14ba2d87ecf5c092f18f60ee9adf61d9ce855 not found: ID does not exist"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.423431 4656 scope.go:117] "RemoveContainer" containerID="3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f"
Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.423943 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f\": container with ID starting with 3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f not found: ID does not exist" containerID="3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.423969 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f"} err="failed to get container status \"3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f\": rpc error: code = NotFound desc = could not find container \"3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f\": container with ID starting with 3d0cf9566390ca7dced70a041e6af5a959413f0c51ace5e48bce0ea196016a2f not found: ID does not exist"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.423987 4656 scope.go:117] "RemoveContainer" containerID="6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a"
Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.424718 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a\": container with ID starting with 6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a not found: ID does not exist" containerID="6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.424752 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a"} err="failed to get container status \"6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a\": rpc error: code = NotFound desc = could not find container \"6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a\": container with ID starting with 6238093bd476f33394e4f0f2a9858e7258dbf5da3fa301bbc31b874b68cebe4a not found: ID does not exist"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.424773 4656 scope.go:117] "RemoveContainer" containerID="38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.441962 4656 scope.go:117] "RemoveContainer" containerID="bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.456137 4656 scope.go:117] "RemoveContainer" containerID="38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4"
Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.456591 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4\": container with ID starting with 38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4 not found: ID does not exist" containerID="38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.456624 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4"} err="failed to get container status \"38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4\": rpc error: code = NotFound desc = could not find container \"38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4\": container with ID starting with 38ec56a69fb04ee2425a99cbe33419d448869f2242de5c68ebd3395dc5656af4 not found: ID does not exist"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.456653 4656 scope.go:117] "RemoveContainer" containerID="bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469"
Jan 28 15:24:41 crc kubenswrapper[4656]: E0128 15:24:41.457168 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469\": container with ID starting with bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469 not found: ID does not exist" containerID="bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.457201 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469"} err="failed to get container status \"bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469\": rpc error: code = NotFound desc = could not find container \"bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469\": container with ID starting with bc06cb4b05fba85b3cff02ac030c46d8d6d223106a8984a43c30bc2cd6e3a469 not found: ID does not exist"
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.620597 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8dc6j"]
Jan 28 15:24:41 crc kubenswrapper[4656]: I0128 15:24:41.625386 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8dc6j"]
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.033324 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" event={"ID":"c48ec7e7-12ff-47f6-9a82-59078f7c2b04","Type":"ContainerStarted","Data":"da32adc5e2701d55c784c0a58b5abc37943b2c44d237de786fdee1decca27137"}
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.033481 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.036530 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.058210 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2xzq6" podStartSLOduration=3.058187791 podStartE2EDuration="3.058187791s" podCreationTimestamp="2026-01-28 15:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:42.0517855 +0000 UTC m=+372.559956314" watchObservedRunningTime="2026-01-28 15:24:42.058187791 +0000 UTC m=+372.566358595"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120466 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4gsg5"]
Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120757 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerName="extract-content"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120783 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerName="extract-content"
Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120804 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerName="registry-server"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120812 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerName="registry-server"
Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120822 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="registry-server"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120832 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="registry-server"
Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120840 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="extract-content"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120848 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="extract-content"
Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120859 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerName="registry-server"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120866 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerName="registry-server"
Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120876 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerName="extract-utilities"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120884 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerName="extract-utilities"
Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120894 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="extract-utilities"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120902 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="extract-utilities"
Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120912 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120920 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator"
Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120929 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerName="extract-utilities"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120935 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerName="extract-utilities"
Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120944 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerName="extract-content"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120951 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerName="extract-content"
Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120967 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120974 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator"
Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.120987 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerName="extract-content"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.120995 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerName="extract-content"
Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.121008 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerName="extract-utilities"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.121016 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerName="extract-utilities"
Jan 28 15:24:42 crc kubenswrapper[4656]: E0128 15:24:42.121028 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerName="registry-server"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.121036 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerName="registry-server"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.121202 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.121239 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" containerName="registry-server"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.121250 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0fe12da-fb7d-444b-b8d3-47e5988fb7f9" containerName="registry-server"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.121261 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" containerName="registry-server"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.121270 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" containerName="registry-server"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.121280 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b09f99-0d13-49a0-8b8d-fc77915a171d" containerName="marketplace-operator"
Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.122028 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4gsg5"
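Annotation: the cpu_manager/memory_manager burst above fires while community-operators-4gsg5 is being admitted: before computing new assignments, both resource managers sweep their checkpointed per-container state and drop entries belonging to pod UIDs that no longer exist (the catalog pods just deleted). A toy model of that sweep, with illustrative types rather than the kubelet's actual state structures:

package main

import "fmt"

// State is keyed by (podUID, containerName); entries whose pod is no longer
// active are reaped before a new pod is admitted, mirroring the
// "RemoveStaleState: removing container" / "Deleted CPUSet assignment" pairs above.
type key struct{ podUID, container string }

func removeStaleState(state map[key]string, activePods map[string]bool) {
	for k := range state { // deleting during range is safe in Go
		if !activePods[k.podUID] {
			fmt.Printf("removing stale assignment: pod %s container %s\n", k.podUID, k.container)
			delete(state, k)
		}
	}
}

func main() {
	state := map[key]string{
		{"7de9fc74-9948-4e73-ac93-25f9c22189ce", "registry-server"}: "cpus 0-1",
		{"6c5ec616-76e4-4b6f-93ce-ca2dba833b37", "registry-server"}: "cpus 2-3",
	}
	removeStaleState(state, map[string]bool{"6c5ec616-76e4-4b6f-93ce-ca2dba833b37": true})
	fmt.Println(len(state)) // 1: only the live pod's entry remains
}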
Need to start a new one" pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.132467 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.145787 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gsg5"] Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.229779 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-catalog-content\") pod \"community-operators-4gsg5\" (UID: \"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.230330 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7xsl\" (UniqueName: \"kubernetes.io/projected/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-kube-api-access-j7xsl\") pod \"community-operators-4gsg5\" (UID: \"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.230435 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-utilities\") pod \"community-operators-4gsg5\" (UID: \"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.332245 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-catalog-content\") pod \"community-operators-4gsg5\" (UID: \"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.332339 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7xsl\" (UniqueName: \"kubernetes.io/projected/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-kube-api-access-j7xsl\") pod \"community-operators-4gsg5\" (UID: \"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.332378 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-utilities\") pod \"community-operators-4gsg5\" (UID: \"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.332897 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-utilities\") pod \"community-operators-4gsg5\" (UID: \"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.333299 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-catalog-content\") pod \"community-operators-4gsg5\" (UID: 
\"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.352622 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7xsl\" (UniqueName: \"kubernetes.io/projected/6c5ec616-76e4-4b6f-93ce-ca2dba833b37-kube-api-access-j7xsl\") pod \"community-operators-4gsg5\" (UID: \"6c5ec616-76e4-4b6f-93ce-ca2dba833b37\") " pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.438467 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:42 crc kubenswrapper[4656]: I0128 15:24:42.870890 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gsg5"] Jan 28 15:24:42 crc kubenswrapper[4656]: W0128 15:24:42.878024 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c5ec616_76e4_4b6f_93ce_ca2dba833b37.slice/crio-25ad831370b786636fc0bdeead5f59eaff3c46dd8afa92bf25aa6907e5ac639a WatchSource:0}: Error finding container 25ad831370b786636fc0bdeead5f59eaff3c46dd8afa92bf25aa6907e5ac639a: Status 404 returned error can't find the container with id 25ad831370b786636fc0bdeead5f59eaff3c46dd8afa92bf25aa6907e5ac639a Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.048255 4656 generic.go:334] "Generic (PLEG): container finished" podID="6c5ec616-76e4-4b6f-93ce-ca2dba833b37" containerID="31d9360eee16d936840bdbdf3c235ee6534bfc3ccba931d5ee21c5dcce82066a" exitCode=0 Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.048866 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gsg5" event={"ID":"6c5ec616-76e4-4b6f-93ce-ca2dba833b37","Type":"ContainerDied","Data":"31d9360eee16d936840bdbdf3c235ee6534bfc3ccba931d5ee21c5dcce82066a"} Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.048890 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gsg5" event={"ID":"6c5ec616-76e4-4b6f-93ce-ca2dba833b37","Type":"ContainerStarted","Data":"25ad831370b786636fc0bdeead5f59eaff3c46dd8afa92bf25aa6907e5ac639a"} Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.126842 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2b7pm"] Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.128269 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.132966 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.143751 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2b7pm"] Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.177932 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7de9fc74-9948-4e73-ac93-25f9c22189ce" path="/var/lib/kubelet/pods/7de9fc74-9948-4e73-ac93-25f9c22189ce/volumes" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.178520 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b1aae7-caaa-427d-8b07-705b02e81763" path="/var/lib/kubelet/pods/a6b1aae7-caaa-427d-8b07-705b02e81763/volumes" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.179049 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1" path="/var/lib/kubelet/pods/fa03ba88-a1f2-4e9a-b0d7-e7d84c6f3cf1/volumes" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.260392 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xgqt\" (UniqueName: \"kubernetes.io/projected/816a03ab-31e5-4d9a-b66c-3787ac9335a9-kube-api-access-9xgqt\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.260635 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/816a03ab-31e5-4d9a-b66c-3787ac9335a9-utilities\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.260804 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/816a03ab-31e5-4d9a-b66c-3787ac9335a9-catalog-content\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.362501 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xgqt\" (UniqueName: \"kubernetes.io/projected/816a03ab-31e5-4d9a-b66c-3787ac9335a9-kube-api-access-9xgqt\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.362587 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/816a03ab-31e5-4d9a-b66c-3787ac9335a9-utilities\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.362621 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/816a03ab-31e5-4d9a-b66c-3787ac9335a9-catalog-content\") pod \"redhat-marketplace-2b7pm\" (UID: 
\"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.363081 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/816a03ab-31e5-4d9a-b66c-3787ac9335a9-utilities\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.363148 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/816a03ab-31e5-4d9a-b66c-3787ac9335a9-catalog-content\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.384907 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xgqt\" (UniqueName: \"kubernetes.io/projected/816a03ab-31e5-4d9a-b66c-3787ac9335a9-kube-api-access-9xgqt\") pod \"redhat-marketplace-2b7pm\" (UID: \"816a03ab-31e5-4d9a-b66c-3787ac9335a9\") " pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.443092 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.728986 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-pml8w" Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.793560 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5r48x"] Jan 28 15:24:43 crc kubenswrapper[4656]: I0128 15:24:43.877625 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2b7pm"] Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.055078 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gsg5" event={"ID":"6c5ec616-76e4-4b6f-93ce-ca2dba833b37","Type":"ContainerStarted","Data":"231f7f8aa55dbc91bccac98c0157e24f18349d8c2c0ca6b62358ce884d984dc3"} Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.057313 4656 generic.go:334] "Generic (PLEG): container finished" podID="816a03ab-31e5-4d9a-b66c-3787ac9335a9" containerID="859f6d41609f7f1bc4d4a0255c122a88748ed8e069d287aca7332cbc724fe2f3" exitCode=0 Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.058526 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2b7pm" event={"ID":"816a03ab-31e5-4d9a-b66c-3787ac9335a9","Type":"ContainerDied","Data":"859f6d41609f7f1bc4d4a0255c122a88748ed8e069d287aca7332cbc724fe2f3"} Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.058554 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2b7pm" event={"ID":"816a03ab-31e5-4d9a-b66c-3787ac9335a9","Type":"ContainerStarted","Data":"a89c3890ee1f2e08ca9f5dda5ec4062471d4aaa5528044e221f9a099ad0b9520"} Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.721326 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gbjhq"] Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.722823 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.725549 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.740235 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gbjhq"] Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.885188 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-utilities\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.885602 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-catalog-content\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.885826 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6bdv\" (UniqueName: \"kubernetes.io/projected/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-kube-api-access-b6bdv\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.993708 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6bdv\" (UniqueName: \"kubernetes.io/projected/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-kube-api-access-b6bdv\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.994071 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-utilities\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.994237 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-catalog-content\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.994666 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-utilities\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:24:44 crc kubenswrapper[4656]: I0128 15:24:44.994794 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-catalog-content\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " 
pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.013814 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6bdv\" (UniqueName: \"kubernetes.io/projected/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-kube-api-access-b6bdv\") pod \"redhat-operators-gbjhq\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.038452 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.064786 4656 generic.go:334] "Generic (PLEG): container finished" podID="6c5ec616-76e4-4b6f-93ce-ca2dba833b37" containerID="231f7f8aa55dbc91bccac98c0157e24f18349d8c2c0ca6b62358ce884d984dc3" exitCode=0 Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.064843 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gsg5" event={"ID":"6c5ec616-76e4-4b6f-93ce-ca2dba833b37","Type":"ContainerDied","Data":"231f7f8aa55dbc91bccac98c0157e24f18349d8c2c0ca6b62358ce884d984dc3"} Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.449863 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gbjhq"] Jan 28 15:24:45 crc kubenswrapper[4656]: E0128 15:24:45.661662 4656 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea7644c9_f50c_43f8_8165_3fa375c3b9c0.slice/crio-conmon-df16b6a6ba81e2b7746a8de266aea33bfee9014a3f26a0a2bc636c8468b4da77.scope\": RecentStats: unable to find data in memory cache]" Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.727242 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l9zl7"] Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.728886 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l9zl7" Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.734639 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9zl7"] Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.734863 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.806501 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9trsc\" (UniqueName: \"kubernetes.io/projected/769c7d2c-1d96-4056-9165-ebf9a1cefc45-kube-api-access-9trsc\") pod \"certified-operators-l9zl7\" (UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7" Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.806550 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/769c7d2c-1d96-4056-9165-ebf9a1cefc45-utilities\") pod \"certified-operators-l9zl7\" (UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7" Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.806572 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/769c7d2c-1d96-4056-9165-ebf9a1cefc45-catalog-content\") pod \"certified-operators-l9zl7\" (UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7" Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.908316 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9trsc\" (UniqueName: \"kubernetes.io/projected/769c7d2c-1d96-4056-9165-ebf9a1cefc45-kube-api-access-9trsc\") pod \"certified-operators-l9zl7\" (UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7" Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.908368 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/769c7d2c-1d96-4056-9165-ebf9a1cefc45-utilities\") pod \"certified-operators-l9zl7\" (UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7" Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.908388 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/769c7d2c-1d96-4056-9165-ebf9a1cefc45-catalog-content\") pod \"certified-operators-l9zl7\" (UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7" Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.908847 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/769c7d2c-1d96-4056-9165-ebf9a1cefc45-catalog-content\") pod \"certified-operators-l9zl7\" (UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7" Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.909723 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/769c7d2c-1d96-4056-9165-ebf9a1cefc45-utilities\") pod \"certified-operators-l9zl7\" (UID: 
\"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7" Jan 28 15:24:45 crc kubenswrapper[4656]: I0128 15:24:45.927135 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9trsc\" (UniqueName: \"kubernetes.io/projected/769c7d2c-1d96-4056-9165-ebf9a1cefc45-kube-api-access-9trsc\") pod \"certified-operators-l9zl7\" (UID: \"769c7d2c-1d96-4056-9165-ebf9a1cefc45\") " pod="openshift-marketplace/certified-operators-l9zl7" Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.048037 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9zl7" Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.071649 4656 generic.go:334] "Generic (PLEG): container finished" podID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerID="df16b6a6ba81e2b7746a8de266aea33bfee9014a3f26a0a2bc636c8468b4da77" exitCode=0 Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.072000 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbjhq" event={"ID":"ea7644c9-f50c-43f8-8165-3fa375c3b9c0","Type":"ContainerDied","Data":"df16b6a6ba81e2b7746a8de266aea33bfee9014a3f26a0a2bc636c8468b4da77"} Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.072054 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbjhq" event={"ID":"ea7644c9-f50c-43f8-8165-3fa375c3b9c0","Type":"ContainerStarted","Data":"2dd598bb76a09812057b8dc1896e04e054099bec7b15c382202025f6bc1dcb53"} Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.080660 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gsg5" event={"ID":"6c5ec616-76e4-4b6f-93ce-ca2dba833b37","Type":"ContainerStarted","Data":"4839ca45df2dfba34812412f8dc1889ac975c0e2b1dc2af364d6d46adfb09dda"} Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.083830 4656 generic.go:334] "Generic (PLEG): container finished" podID="816a03ab-31e5-4d9a-b66c-3787ac9335a9" containerID="b34bc4f99fbdbfc21e766ac0ed0908bd906792e88b39e74042bf0ebc3a8d2185" exitCode=0 Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.083871 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2b7pm" event={"ID":"816a03ab-31e5-4d9a-b66c-3787ac9335a9","Type":"ContainerDied","Data":"b34bc4f99fbdbfc21e766ac0ed0908bd906792e88b39e74042bf0ebc3a8d2185"} Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.120310 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4gsg5" podStartSLOduration=1.634000815 podStartE2EDuration="4.120284698s" podCreationTimestamp="2026-01-28 15:24:42 +0000 UTC" firstStartedPulling="2026-01-28 15:24:43.050798521 +0000 UTC m=+373.558969325" lastFinishedPulling="2026-01-28 15:24:45.537082404 +0000 UTC m=+376.045253208" observedRunningTime="2026-01-28 15:24:46.117502209 +0000 UTC m=+376.625673023" watchObservedRunningTime="2026-01-28 15:24:46.120284698 +0000 UTC m=+376.628455502" Jan 28 15:24:46 crc kubenswrapper[4656]: I0128 15:24:46.598255 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9zl7"] Jan 28 15:24:47 crc kubenswrapper[4656]: I0128 15:24:47.094297 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2b7pm" 
event={"ID":"816a03ab-31e5-4d9a-b66c-3787ac9335a9","Type":"ContainerStarted","Data":"983b27a00634327ed787b3afb9d077c285d644ffb6772c6c1133029a5b1be0ea"} Jan 28 15:24:47 crc kubenswrapper[4656]: I0128 15:24:47.097341 4656 generic.go:334] "Generic (PLEG): container finished" podID="769c7d2c-1d96-4056-9165-ebf9a1cefc45" containerID="4e8fbb9400031706ec2a7786536678789e04a336c943221496437db3199921c5" exitCode=0 Jan 28 15:24:47 crc kubenswrapper[4656]: I0128 15:24:47.098589 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9zl7" event={"ID":"769c7d2c-1d96-4056-9165-ebf9a1cefc45","Type":"ContainerDied","Data":"4e8fbb9400031706ec2a7786536678789e04a336c943221496437db3199921c5"} Jan 28 15:24:47 crc kubenswrapper[4656]: I0128 15:24:47.098625 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9zl7" event={"ID":"769c7d2c-1d96-4056-9165-ebf9a1cefc45","Type":"ContainerStarted","Data":"9aeffcfa6316b99b0439b329440f64e8e84e2f6c230abd816febe3be888893f1"} Jan 28 15:24:47 crc kubenswrapper[4656]: I0128 15:24:47.126006 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2b7pm" podStartSLOduration=1.627282456 podStartE2EDuration="4.125974749s" podCreationTimestamp="2026-01-28 15:24:43 +0000 UTC" firstStartedPulling="2026-01-28 15:24:44.059055925 +0000 UTC m=+374.567226729" lastFinishedPulling="2026-01-28 15:24:46.557748218 +0000 UTC m=+377.065919022" observedRunningTime="2026-01-28 15:24:47.121555224 +0000 UTC m=+377.629726038" watchObservedRunningTime="2026-01-28 15:24:47.125974749 +0000 UTC m=+377.634145543" Jan 28 15:24:48 crc kubenswrapper[4656]: I0128 15:24:48.109827 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9zl7" event={"ID":"769c7d2c-1d96-4056-9165-ebf9a1cefc45","Type":"ContainerStarted","Data":"0447c1deb99eafea9e18abfe4ecf25bab580127d202a734a78186a7343a1aa00"} Jan 28 15:24:48 crc kubenswrapper[4656]: I0128 15:24:48.113911 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbjhq" event={"ID":"ea7644c9-f50c-43f8-8165-3fa375c3b9c0","Type":"ContainerStarted","Data":"98395829abb715f8be4df3366fdfa04876abca34d58e9c479a1d7e699b8ca848"} Jan 28 15:24:49 crc kubenswrapper[4656]: I0128 15:24:49.121402 4656 generic.go:334] "Generic (PLEG): container finished" podID="769c7d2c-1d96-4056-9165-ebf9a1cefc45" containerID="0447c1deb99eafea9e18abfe4ecf25bab580127d202a734a78186a7343a1aa00" exitCode=0 Jan 28 15:24:49 crc kubenswrapper[4656]: I0128 15:24:49.121679 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9zl7" event={"ID":"769c7d2c-1d96-4056-9165-ebf9a1cefc45","Type":"ContainerDied","Data":"0447c1deb99eafea9e18abfe4ecf25bab580127d202a734a78186a7343a1aa00"} Jan 28 15:24:49 crc kubenswrapper[4656]: I0128 15:24:49.127923 4656 generic.go:334] "Generic (PLEG): container finished" podID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerID="98395829abb715f8be4df3366fdfa04876abca34d58e9c479a1d7e699b8ca848" exitCode=0 Jan 28 15:24:49 crc kubenswrapper[4656]: I0128 15:24:49.127965 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbjhq" event={"ID":"ea7644c9-f50c-43f8-8165-3fa375c3b9c0","Type":"ContainerDied","Data":"98395829abb715f8be4df3366fdfa04876abca34d58e9c479a1d7e699b8ca848"} Jan 28 15:24:49 crc kubenswrapper[4656]: I0128 15:24:49.127994 4656 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbjhq" event={"ID":"ea7644c9-f50c-43f8-8165-3fa375c3b9c0","Type":"ContainerStarted","Data":"7357e3dce7c91d9015f9378573757a7ac9f266086a426319f16d40934b333d4a"} Jan 28 15:24:49 crc kubenswrapper[4656]: I0128 15:24:49.162538 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gbjhq" podStartSLOduration=2.4401473989999998 podStartE2EDuration="5.162416841s" podCreationTimestamp="2026-01-28 15:24:44 +0000 UTC" firstStartedPulling="2026-01-28 15:24:46.076324493 +0000 UTC m=+376.584495297" lastFinishedPulling="2026-01-28 15:24:48.798593925 +0000 UTC m=+379.306764739" observedRunningTime="2026-01-28 15:24:49.159848489 +0000 UTC m=+379.668019313" watchObservedRunningTime="2026-01-28 15:24:49.162416841 +0000 UTC m=+379.670587645" Jan 28 15:24:50 crc kubenswrapper[4656]: I0128 15:24:50.138218 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9zl7" event={"ID":"769c7d2c-1d96-4056-9165-ebf9a1cefc45","Type":"ContainerStarted","Data":"1c335d1f061f34b0509ecb7815ca45aaea1789e9f989b13d691a9e29d502d25d"} Jan 28 15:24:50 crc kubenswrapper[4656]: I0128 15:24:50.160984 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l9zl7" podStartSLOduration=2.619369293 podStartE2EDuration="5.16096349s" podCreationTimestamp="2026-01-28 15:24:45 +0000 UTC" firstStartedPulling="2026-01-28 15:24:47.100621132 +0000 UTC m=+377.608791936" lastFinishedPulling="2026-01-28 15:24:49.642215299 +0000 UTC m=+380.150386133" observedRunningTime="2026-01-28 15:24:50.158078158 +0000 UTC m=+380.666248962" watchObservedRunningTime="2026-01-28 15:24:50.16096349 +0000 UTC m=+380.669134294" Jan 28 15:24:52 crc kubenswrapper[4656]: I0128 15:24:52.439805 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:52 crc kubenswrapper[4656]: I0128 15:24:52.440330 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:52 crc kubenswrapper[4656]: I0128 15:24:52.480374 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:53 crc kubenswrapper[4656]: I0128 15:24:53.193882 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4gsg5" Jan 28 15:24:53 crc kubenswrapper[4656]: I0128 15:24:53.443510 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:53 crc kubenswrapper[4656]: I0128 15:24:53.444720 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:53 crc kubenswrapper[4656]: I0128 15:24:53.480077 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:54 crc kubenswrapper[4656]: I0128 15:24:54.198670 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2b7pm" Jan 28 15:24:55 crc kubenswrapper[4656]: I0128 15:24:55.039353 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 
15:24:55 crc kubenswrapper[4656]: I0128 15:24:55.039417 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:24:55 crc kubenswrapper[4656]: I0128 15:24:55.082259 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:24:55 crc kubenswrapper[4656]: I0128 15:24:55.199206 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:24:56 crc kubenswrapper[4656]: I0128 15:24:56.048811 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l9zl7" Jan 28 15:24:56 crc kubenswrapper[4656]: I0128 15:24:56.049176 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l9zl7" Jan 28 15:24:56 crc kubenswrapper[4656]: I0128 15:24:56.098360 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l9zl7" Jan 28 15:24:56 crc kubenswrapper[4656]: I0128 15:24:56.236068 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l9zl7" Jan 28 15:25:08 crc kubenswrapper[4656]: I0128 15:25:08.855091 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" podUID="5823f5c7-fabe-4d4b-a3df-49349749b19e" containerName="registry" containerID="cri-o://e21bd5b2594163e4a314e6e9e388228b463a2c6dc9baa4a831b207ddedbec967" gracePeriod=30 Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.270865 4656 generic.go:334] "Generic (PLEG): container finished" podID="5823f5c7-fabe-4d4b-a3df-49349749b19e" containerID="e21bd5b2594163e4a314e6e9e388228b463a2c6dc9baa4a831b207ddedbec967" exitCode=0 Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.270945 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" event={"ID":"5823f5c7-fabe-4d4b-a3df-49349749b19e","Type":"ContainerDied","Data":"e21bd5b2594163e4a314e6e9e388228b463a2c6dc9baa4a831b207ddedbec967"} Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.335892 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.416246 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-tls\") pod \"5823f5c7-fabe-4d4b-a3df-49349749b19e\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.416293 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-bound-sa-token\") pod \"5823f5c7-fabe-4d4b-a3df-49349749b19e\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.416330 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt6gm\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-kube-api-access-bt6gm\") pod \"5823f5c7-fabe-4d4b-a3df-49349749b19e\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.416384 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-trusted-ca\") pod \"5823f5c7-fabe-4d4b-a3df-49349749b19e\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.416552 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5823f5c7-fabe-4d4b-a3df-49349749b19e\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.416598 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5823f5c7-fabe-4d4b-a3df-49349749b19e-installation-pull-secrets\") pod \"5823f5c7-fabe-4d4b-a3df-49349749b19e\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.416870 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-certificates\") pod \"5823f5c7-fabe-4d4b-a3df-49349749b19e\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.416895 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5823f5c7-fabe-4d4b-a3df-49349749b19e-ca-trust-extracted\") pod \"5823f5c7-fabe-4d4b-a3df-49349749b19e\" (UID: \"5823f5c7-fabe-4d4b-a3df-49349749b19e\") " Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.418458 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5823f5c7-fabe-4d4b-a3df-49349749b19e" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.425763 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-kube-api-access-bt6gm" (OuterVolumeSpecName: "kube-api-access-bt6gm") pod "5823f5c7-fabe-4d4b-a3df-49349749b19e" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e"). InnerVolumeSpecName "kube-api-access-bt6gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.427812 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5823f5c7-fabe-4d4b-a3df-49349749b19e" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.429671 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5823f5c7-fabe-4d4b-a3df-49349749b19e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5823f5c7-fabe-4d4b-a3df-49349749b19e" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.430044 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5823f5c7-fabe-4d4b-a3df-49349749b19e" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.434574 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5823f5c7-fabe-4d4b-a3df-49349749b19e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5823f5c7-fabe-4d4b-a3df-49349749b19e" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.435578 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5823f5c7-fabe-4d4b-a3df-49349749b19e" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.445044 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "5823f5c7-fabe-4d4b-a3df-49349749b19e" (UID: "5823f5c7-fabe-4d4b-a3df-49349749b19e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.518738 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt6gm\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-kube-api-access-bt6gm\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.518776 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.518789 4656 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5823f5c7-fabe-4d4b-a3df-49349749b19e-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.518800 4656 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.518812 4656 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5823f5c7-fabe-4d4b-a3df-49349749b19e-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.518821 4656 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:09 crc kubenswrapper[4656]: I0128 15:25:09.518834 4656 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5823f5c7-fabe-4d4b-a3df-49349749b19e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:10 crc kubenswrapper[4656]: I0128 15:25:10.281626 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" event={"ID":"5823f5c7-fabe-4d4b-a3df-49349749b19e","Type":"ContainerDied","Data":"54b236edb7b566b5bae8f9b4e93d4a4d144dfcf0ffa1b6e2bf3e66d3161ef327"} Jan 28 15:25:10 crc kubenswrapper[4656]: I0128 15:25:10.281833 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5r48x" Jan 28 15:25:10 crc kubenswrapper[4656]: I0128 15:25:10.282044 4656 scope.go:117] "RemoveContainer" containerID="e21bd5b2594163e4a314e6e9e388228b463a2c6dc9baa4a831b207ddedbec967" Jan 28 15:25:10 crc kubenswrapper[4656]: I0128 15:25:10.343532 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5r48x"] Jan 28 15:25:10 crc kubenswrapper[4656]: I0128 15:25:10.351850 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5r48x"] Jan 28 15:25:11 crc kubenswrapper[4656]: I0128 15:25:11.182390 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5823f5c7-fabe-4d4b-a3df-49349749b19e" path="/var/lib/kubelet/pods/5823f5c7-fabe-4d4b-a3df-49349749b19e/volumes" Jan 28 15:25:11 crc kubenswrapper[4656]: I0128 15:25:11.264399 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:25:11 crc kubenswrapper[4656]: I0128 15:25:11.264494 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:25:41 crc kubenswrapper[4656]: I0128 15:25:41.264085 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:25:41 crc kubenswrapper[4656]: I0128 15:25:41.264612 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:26:11 crc kubenswrapper[4656]: I0128 15:26:11.264513 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:26:11 crc kubenswrapper[4656]: I0128 15:26:11.266012 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:26:11 crc kubenswrapper[4656]: I0128 15:26:11.266211 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:26:11 crc kubenswrapper[4656]: I0128 15:26:11.266938 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"d18f94cea4f3c54ba99c855b801d8b744d7657dab8312dfc4b6351d91d1b429d"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:26:11 crc kubenswrapper[4656]: I0128 15:26:11.267082 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://d18f94cea4f3c54ba99c855b801d8b744d7657dab8312dfc4b6351d91d1b429d" gracePeriod=600 Jan 28 15:26:11 crc kubenswrapper[4656]: I0128 15:26:11.708016 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="d18f94cea4f3c54ba99c855b801d8b744d7657dab8312dfc4b6351d91d1b429d" exitCode=0 Jan 28 15:26:11 crc kubenswrapper[4656]: I0128 15:26:11.708092 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"d18f94cea4f3c54ba99c855b801d8b744d7657dab8312dfc4b6351d91d1b429d"} Jan 28 15:26:11 crc kubenswrapper[4656]: I0128 15:26:11.708434 4656 scope.go:117] "RemoveContainer" containerID="a9e719f178b739bc9185f745896b6dadb1b0137e503aecb96aee7fb5c71989f1" Jan 28 15:26:12 crc kubenswrapper[4656]: I0128 15:26:12.717646 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"c69beb7ab8edbd918c480179277219ae11258f52e8862dd697c2421ee64e9af1"} Jan 28 15:28:41 crc kubenswrapper[4656]: I0128 15:28:41.264864 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:28:41 crc kubenswrapper[4656]: I0128 15:28:41.265606 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:29:11 crc kubenswrapper[4656]: I0128 15:29:11.264545 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:29:11 crc kubenswrapper[4656]: I0128 15:29:11.265092 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:29:41 crc kubenswrapper[4656]: I0128 15:29:41.265352 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:29:41 crc kubenswrapper[4656]: I0128 15:29:41.266093 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:29:41 crc kubenswrapper[4656]: I0128 15:29:41.266197 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:29:41 crc kubenswrapper[4656]: I0128 15:29:41.266839 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c69beb7ab8edbd918c480179277219ae11258f52e8862dd697c2421ee64e9af1"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:29:41 crc kubenswrapper[4656]: I0128 15:29:41.266896 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://c69beb7ab8edbd918c480179277219ae11258f52e8862dd697c2421ee64e9af1" gracePeriod=600 Jan 28 15:29:42 crc kubenswrapper[4656]: I0128 15:29:42.092378 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="c69beb7ab8edbd918c480179277219ae11258f52e8862dd697c2421ee64e9af1" exitCode=0 Jan 28 15:29:42 crc kubenswrapper[4656]: I0128 15:29:42.092437 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"c69beb7ab8edbd918c480179277219ae11258f52e8862dd697c2421ee64e9af1"} Jan 28 15:29:42 crc kubenswrapper[4656]: I0128 15:29:42.092727 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"87c17d0db94ead712d442056e9a18e38055b40f27c59008c11f1ea77ac6037d0"} Jan 28 15:29:42 crc kubenswrapper[4656]: I0128 15:29:42.092786 4656 scope.go:117] "RemoveContainer" containerID="d18f94cea4f3c54ba99c855b801d8b744d7657dab8312dfc4b6351d91d1b429d" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.250052 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n"] Jan 28 15:30:00 crc kubenswrapper[4656]: E0128 15:30:00.251610 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5823f5c7-fabe-4d4b-a3df-49349749b19e" containerName="registry" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.251640 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5823f5c7-fabe-4d4b-a3df-49349749b19e" containerName="registry" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.251844 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5823f5c7-fabe-4d4b-a3df-49349749b19e" containerName="registry" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.252469 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.255201 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.255389 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.268644 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n"] Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.310750 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/647c2e48-5d47-46f5-bd41-1512da5aef27-config-volume\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.310837 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/647c2e48-5d47-46f5-bd41-1512da5aef27-secret-volume\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.311089 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfc9w\" (UniqueName: \"kubernetes.io/projected/647c2e48-5d47-46f5-bd41-1512da5aef27-kube-api-access-rfc9w\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.412668 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/647c2e48-5d47-46f5-bd41-1512da5aef27-secret-volume\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.412991 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfc9w\" (UniqueName: \"kubernetes.io/projected/647c2e48-5d47-46f5-bd41-1512da5aef27-kube-api-access-rfc9w\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.413084 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/647c2e48-5d47-46f5-bd41-1512da5aef27-config-volume\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.414050 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/647c2e48-5d47-46f5-bd41-1512da5aef27-config-volume\") pod 
\"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.419973 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/647c2e48-5d47-46f5-bd41-1512da5aef27-secret-volume\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.430469 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfc9w\" (UniqueName: \"kubernetes.io/projected/647c2e48-5d47-46f5-bd41-1512da5aef27-kube-api-access-rfc9w\") pod \"collect-profiles-29493570-d2j9n\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.568450 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:00 crc kubenswrapper[4656]: I0128 15:30:00.890219 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n"] Jan 28 15:30:01 crc kubenswrapper[4656]: I0128 15:30:01.216731 4656 generic.go:334] "Generic (PLEG): container finished" podID="647c2e48-5d47-46f5-bd41-1512da5aef27" containerID="e478d6329d394a8ee6946b81a0192102dc331c4b0fad2b32a9549e2af991b8fd" exitCode=0 Jan 28 15:30:01 crc kubenswrapper[4656]: I0128 15:30:01.216787 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" event={"ID":"647c2e48-5d47-46f5-bd41-1512da5aef27","Type":"ContainerDied","Data":"e478d6329d394a8ee6946b81a0192102dc331c4b0fad2b32a9549e2af991b8fd"} Jan 28 15:30:01 crc kubenswrapper[4656]: I0128 15:30:01.216857 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" event={"ID":"647c2e48-5d47-46f5-bd41-1512da5aef27","Type":"ContainerStarted","Data":"6e1794ee00e769b0083dd072b9a68daa9637d06cf8f13a30e84b9a62689bf43f"} Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.424466 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.565500 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfc9w\" (UniqueName: \"kubernetes.io/projected/647c2e48-5d47-46f5-bd41-1512da5aef27-kube-api-access-rfc9w\") pod \"647c2e48-5d47-46f5-bd41-1512da5aef27\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.565860 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/647c2e48-5d47-46f5-bd41-1512da5aef27-config-volume\") pod \"647c2e48-5d47-46f5-bd41-1512da5aef27\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.565953 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/647c2e48-5d47-46f5-bd41-1512da5aef27-secret-volume\") pod \"647c2e48-5d47-46f5-bd41-1512da5aef27\" (UID: \"647c2e48-5d47-46f5-bd41-1512da5aef27\") " Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.566744 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/647c2e48-5d47-46f5-bd41-1512da5aef27-config-volume" (OuterVolumeSpecName: "config-volume") pod "647c2e48-5d47-46f5-bd41-1512da5aef27" (UID: "647c2e48-5d47-46f5-bd41-1512da5aef27"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.570806 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/647c2e48-5d47-46f5-bd41-1512da5aef27-kube-api-access-rfc9w" (OuterVolumeSpecName: "kube-api-access-rfc9w") pod "647c2e48-5d47-46f5-bd41-1512da5aef27" (UID: "647c2e48-5d47-46f5-bd41-1512da5aef27"). InnerVolumeSpecName "kube-api-access-rfc9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.571553 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/647c2e48-5d47-46f5-bd41-1512da5aef27-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "647c2e48-5d47-46f5-bd41-1512da5aef27" (UID: "647c2e48-5d47-46f5-bd41-1512da5aef27"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.668101 4656 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/647c2e48-5d47-46f5-bd41-1512da5aef27-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.668153 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfc9w\" (UniqueName: \"kubernetes.io/projected/647c2e48-5d47-46f5-bd41-1512da5aef27-kube-api-access-rfc9w\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:02 crc kubenswrapper[4656]: I0128 15:30:02.668202 4656 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/647c2e48-5d47-46f5-bd41-1512da5aef27-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:03 crc kubenswrapper[4656]: I0128 15:30:03.233550 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" event={"ID":"647c2e48-5d47-46f5-bd41-1512da5aef27","Type":"ContainerDied","Data":"6e1794ee00e769b0083dd072b9a68daa9637d06cf8f13a30e84b9a62689bf43f"} Jan 28 15:30:03 crc kubenswrapper[4656]: I0128 15:30:03.233610 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e1794ee00e769b0083dd072b9a68daa9637d06cf8f13a30e84b9a62689bf43f" Jan 28 15:30:03 crc kubenswrapper[4656]: I0128 15:30:03.233619 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.234991 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j"] Jan 28 15:30:26 crc kubenswrapper[4656]: E0128 15:30:26.235782 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647c2e48-5d47-46f5-bd41-1512da5aef27" containerName="collect-profiles" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.235796 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="647c2e48-5d47-46f5-bd41-1512da5aef27" containerName="collect-profiles" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.235934 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="647c2e48-5d47-46f5-bd41-1512da5aef27" containerName="collect-profiles" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.236664 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.240063 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.240277 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.246920 4656 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hk547" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.256060 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-cpqlr"] Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.256971 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-cpqlr" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.260667 4656 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-t9z5x" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.271176 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jl8hn"] Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.272097 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.274063 4656 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-wldpb" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.275231 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j"] Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.279304 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-cpqlr"] Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.288237 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jl8hn"] Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.433584 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh48s\" (UniqueName: \"kubernetes.io/projected/2afeb8d3-6acc-42ee-aa3d-943c0784354c-kube-api-access-rh48s\") pod \"cert-manager-webhook-687f57d79b-jl8hn\" (UID: \"2afeb8d3-6acc-42ee-aa3d-943c0784354c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.433952 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvfnz\" (UniqueName: \"kubernetes.io/projected/b66b5cb8-91c8-4122-b61a-d2f5f7815d26-kube-api-access-lvfnz\") pod \"cert-manager-858654f9db-cpqlr\" (UID: \"b66b5cb8-91c8-4122-b61a-d2f5f7815d26\") " pod="cert-manager/cert-manager-858654f9db-cpqlr" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.434050 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4ccb\" (UniqueName: \"kubernetes.io/projected/46693ecf-5a40-4182-aeed-7161923e4016-kube-api-access-m4ccb\") pod \"cert-manager-cainjector-cf98fcc89-8bd5j\" (UID: \"46693ecf-5a40-4182-aeed-7161923e4016\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.536927 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh48s\" (UniqueName: \"kubernetes.io/projected/2afeb8d3-6acc-42ee-aa3d-943c0784354c-kube-api-access-rh48s\") pod \"cert-manager-webhook-687f57d79b-jl8hn\" (UID: \"2afeb8d3-6acc-42ee-aa3d-943c0784354c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.537032 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvfnz\" (UniqueName: \"kubernetes.io/projected/b66b5cb8-91c8-4122-b61a-d2f5f7815d26-kube-api-access-lvfnz\") pod \"cert-manager-858654f9db-cpqlr\" (UID: \"b66b5cb8-91c8-4122-b61a-d2f5f7815d26\") " pod="cert-manager/cert-manager-858654f9db-cpqlr" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.537067 4656 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4ccb\" (UniqueName: \"kubernetes.io/projected/46693ecf-5a40-4182-aeed-7161923e4016-kube-api-access-m4ccb\") pod \"cert-manager-cainjector-cf98fcc89-8bd5j\" (UID: \"46693ecf-5a40-4182-aeed-7161923e4016\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.569768 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4ccb\" (UniqueName: \"kubernetes.io/projected/46693ecf-5a40-4182-aeed-7161923e4016-kube-api-access-m4ccb\") pod \"cert-manager-cainjector-cf98fcc89-8bd5j\" (UID: \"46693ecf-5a40-4182-aeed-7161923e4016\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.571057 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh48s\" (UniqueName: \"kubernetes.io/projected/2afeb8d3-6acc-42ee-aa3d-943c0784354c-kube-api-access-rh48s\") pod \"cert-manager-webhook-687f57d79b-jl8hn\" (UID: \"2afeb8d3-6acc-42ee-aa3d-943c0784354c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.581690 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvfnz\" (UniqueName: \"kubernetes.io/projected/b66b5cb8-91c8-4122-b61a-d2f5f7815d26-kube-api-access-lvfnz\") pod \"cert-manager-858654f9db-cpqlr\" (UID: \"b66b5cb8-91c8-4122-b61a-d2f5f7815d26\") " pod="cert-manager/cert-manager-858654f9db-cpqlr" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.585841 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.793204 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jl8hn"] Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.806511 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.852623 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j" Jan 28 15:30:26 crc kubenswrapper[4656]: I0128 15:30:26.876341 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-cpqlr" Jan 28 15:30:27 crc kubenswrapper[4656]: I0128 15:30:27.102187 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-cpqlr"] Jan 28 15:30:27 crc kubenswrapper[4656]: W0128 15:30:27.105824 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb66b5cb8_91c8_4122_b61a_d2f5f7815d26.slice/crio-dd857eabfe083949997ea5d9d04ffbc4bc706cbcdbf93c82ee5ffe2cc47518f5 WatchSource:0}: Error finding container dd857eabfe083949997ea5d9d04ffbc4bc706cbcdbf93c82ee5ffe2cc47518f5: Status 404 returned error can't find the container with id dd857eabfe083949997ea5d9d04ffbc4bc706cbcdbf93c82ee5ffe2cc47518f5 Jan 28 15:30:27 crc kubenswrapper[4656]: I0128 15:30:27.244082 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j"] Jan 28 15:30:27 crc kubenswrapper[4656]: W0128 15:30:27.247938 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46693ecf_5a40_4182_aeed_7161923e4016.slice/crio-08a4b62589684b3764525eef92b05232d6f86ab755dbbe409e015a45599208fa WatchSource:0}: Error finding container 08a4b62589684b3764525eef92b05232d6f86ab755dbbe409e015a45599208fa: Status 404 returned error can't find the container with id 08a4b62589684b3764525eef92b05232d6f86ab755dbbe409e015a45599208fa Jan 28 15:30:27 crc kubenswrapper[4656]: I0128 15:30:27.365623 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" event={"ID":"2afeb8d3-6acc-42ee-aa3d-943c0784354c","Type":"ContainerStarted","Data":"4d3e29bd150054b57a1bd17a2ca53bb608c4260029a7226165461fd586af80a7"} Jan 28 15:30:27 crc kubenswrapper[4656]: I0128 15:30:27.366985 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j" event={"ID":"46693ecf-5a40-4182-aeed-7161923e4016","Type":"ContainerStarted","Data":"08a4b62589684b3764525eef92b05232d6f86ab755dbbe409e015a45599208fa"} Jan 28 15:30:27 crc kubenswrapper[4656]: I0128 15:30:27.374191 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-cpqlr" event={"ID":"b66b5cb8-91c8-4122-b61a-d2f5f7815d26","Type":"ContainerStarted","Data":"dd857eabfe083949997ea5d9d04ffbc4bc706cbcdbf93c82ee5ffe2cc47518f5"} Jan 28 15:30:31 crc kubenswrapper[4656]: I0128 15:30:31.406381 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" event={"ID":"2afeb8d3-6acc-42ee-aa3d-943c0784354c","Type":"ContainerStarted","Data":"957633209ab27696b4ffd0da4ec1a922b020e7182324be3d83b10cd883f55544"} Jan 28 15:30:31 crc kubenswrapper[4656]: I0128 15:30:31.406846 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" Jan 28 15:30:31 crc kubenswrapper[4656]: I0128 15:30:31.425974 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" podStartSLOduration=1.796957247 podStartE2EDuration="5.425941018s" podCreationTimestamp="2026-01-28 15:30:26 +0000 UTC" firstStartedPulling="2026-01-28 15:30:26.805997714 +0000 UTC m=+717.314168518" lastFinishedPulling="2026-01-28 15:30:30.434981485 +0000 UTC m=+720.943152289" observedRunningTime="2026-01-28 15:30:31.422358649 +0000 UTC m=+721.930529453" 
watchObservedRunningTime="2026-01-28 15:30:31.425941018 +0000 UTC m=+721.934111812" Jan 28 15:30:34 crc kubenswrapper[4656]: I0128 15:30:34.424812 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-cpqlr" event={"ID":"b66b5cb8-91c8-4122-b61a-d2f5f7815d26","Type":"ContainerStarted","Data":"627cec4c46574dd778de34dedc17b66a7ca1b216d70d028aa2742bf86c25a020"} Jan 28 15:30:34 crc kubenswrapper[4656]: I0128 15:30:34.426050 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j" event={"ID":"46693ecf-5a40-4182-aeed-7161923e4016","Type":"ContainerStarted","Data":"9fadf63e9cb3e630f28dcf28c87eb7218dbc2a851129949abcc79aba8394f5d9"} Jan 28 15:30:34 crc kubenswrapper[4656]: I0128 15:30:34.439138 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-cpqlr" podStartSLOduration=2.025193785 podStartE2EDuration="8.439117857s" podCreationTimestamp="2026-01-28 15:30:26 +0000 UTC" firstStartedPulling="2026-01-28 15:30:27.108596494 +0000 UTC m=+717.616767298" lastFinishedPulling="2026-01-28 15:30:33.522520576 +0000 UTC m=+724.030691370" observedRunningTime="2026-01-28 15:30:34.43778334 +0000 UTC m=+724.945954164" watchObservedRunningTime="2026-01-28 15:30:34.439117857 +0000 UTC m=+724.947288661" Jan 28 15:30:34 crc kubenswrapper[4656]: I0128 15:30:34.461491 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8bd5j" podStartSLOduration=2.183081111 podStartE2EDuration="8.461469864s" podCreationTimestamp="2026-01-28 15:30:26 +0000 UTC" firstStartedPulling="2026-01-28 15:30:27.250289463 +0000 UTC m=+717.758460267" lastFinishedPulling="2026-01-28 15:30:33.528678216 +0000 UTC m=+724.036849020" observedRunningTime="2026-01-28 15:30:34.455977092 +0000 UTC m=+724.964147906" watchObservedRunningTime="2026-01-28 15:30:34.461469864 +0000 UTC m=+724.969640668" Jan 28 15:30:36 crc kubenswrapper[4656]: I0128 15:30:36.589659 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-jl8hn" Jan 28 15:30:48 crc kubenswrapper[4656]: I0128 15:30:48.944305 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kwnzt"] Jan 28 15:30:48 crc kubenswrapper[4656]: I0128 15:30:48.945291 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovn-controller" containerID="cri-o://f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee" gracePeriod=30 Jan 28 15:30:48 crc kubenswrapper[4656]: I0128 15:30:48.945332 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="sbdb" containerID="cri-o://be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a" gracePeriod=30 Jan 28 15:30:48 crc kubenswrapper[4656]: I0128 15:30:48.945436 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kube-rbac-proxy-node" containerID="cri-o://5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97" gracePeriod=30 Jan 28 15:30:48 crc kubenswrapper[4656]: I0128 15:30:48.945457 4656 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovn-acl-logging" containerID="cri-o://c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880" gracePeriod=30 Jan 28 15:30:48 crc kubenswrapper[4656]: I0128 15:30:48.945499 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4" gracePeriod=30 Jan 28 15:30:48 crc kubenswrapper[4656]: I0128 15:30:48.945551 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="northd" containerID="cri-o://8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2" gracePeriod=30 Jan 28 15:30:48 crc kubenswrapper[4656]: I0128 15:30:48.945331 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="nbdb" containerID="cri-o://8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164" gracePeriod=30 Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.014791 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" containerID="cri-o://318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9" gracePeriod=30 Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.401630 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/3.log" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.403866 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovn-acl-logging/0.log" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.404382 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovn-controller/0.log" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.404834 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477358 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-th9gj"] Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477717 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kube-rbac-proxy-node" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477742 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kube-rbac-proxy-node" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477770 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovn-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477778 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovn-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477791 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477799 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477809 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477818 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477831 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="nbdb" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477838 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="nbdb" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477849 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477856 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477887 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477895 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477903 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="sbdb" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477911 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="sbdb" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477921 4656 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477930 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477943 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="northd" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477950 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="northd" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477961 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovn-acl-logging" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477968 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovn-acl-logging" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.477978 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kubecfg-setup" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.477985 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kubecfg-setup" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478141 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="sbdb" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478199 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovn-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478214 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478223 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="northd" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478233 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovn-acl-logging" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478244 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478254 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478262 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478272 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478280 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="nbdb" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478288 4656 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="kube-rbac-proxy-node" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.478399 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478409 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.478519 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" containerName="ovnkube-controller" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.480409 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.553593 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-ovn-kubernetes\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.553702 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68qp2\" (UniqueName: \"kubernetes.io/projected/5748c84b-daec-4bf0-bda9-180d379ab075-kube-api-access-68qp2\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.553749 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-ovn\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.553816 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-var-lib-openvswitch\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.553851 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-env-overrides\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.553840 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.553884 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-slash\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.553937 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-slash" (OuterVolumeSpecName: "host-slash") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554038 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-systemd-units\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554088 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-log-socket\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554123 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-bin\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554153 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-etc-openvswitch\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554224 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-script-lib\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554265 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-systemd\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554304 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-config\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554326 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-openvswitch\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554369 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748c84b-daec-4bf0-bda9-180d379ab075-ovn-node-metrics-cert\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554394 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-node-log\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554427 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-netns\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554452 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-kubelet\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554483 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554512 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-netd\") pod \"5748c84b-daec-4bf0-bda9-180d379ab075\" (UID: \"5748c84b-daec-4bf0-bda9-180d379ab075\") " Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554843 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554883 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554926 4656 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554943 4656 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-slash\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554981 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.554991 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555055 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555054 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-node-log" (OuterVolumeSpecName: "node-log") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555080 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555095 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-log-socket" (OuterVolumeSpecName: "log-socket") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555098 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555140 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555146 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555196 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555202 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555388 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.555424 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.559811 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5748c84b-daec-4bf0-bda9-180d379ab075-kube-api-access-68qp2" (OuterVolumeSpecName: "kube-api-access-68qp2") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). 
InnerVolumeSpecName "kube-api-access-68qp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.560660 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5748c84b-daec-4bf0-bda9-180d379ab075-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.566120 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/2.log" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.566714 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/1.log" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.566799 4656 generic.go:334] "Generic (PLEG): container finished" podID="7662a84d-d9cb-4684-b76f-c63ffeff8344" containerID="34fa797442b557de0e9ffab2d826f22ba8d92221e464edd57e5778604260c2bd" exitCode=2 Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.566870 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rpzjg" event={"ID":"7662a84d-d9cb-4684-b76f-c63ffeff8344","Type":"ContainerDied","Data":"34fa797442b557de0e9ffab2d826f22ba8d92221e464edd57e5778604260c2bd"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.567357 4656 scope.go:117] "RemoveContainer" containerID="c2a750cbb6ceaa1889263f277b489ae3b92336e27c8e979f65558cbaf0084638" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.568111 4656 scope.go:117] "RemoveContainer" containerID="34fa797442b557de0e9ffab2d826f22ba8d92221e464edd57e5778604260c2bd" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.568431 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-rpzjg_openshift-multus(7662a84d-d9cb-4684-b76f-c63ffeff8344)\"" pod="openshift-multus/multus-rpzjg" podUID="7662a84d-d9cb-4684-b76f-c63ffeff8344" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.570244 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovnkube-controller/3.log" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.573945 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovn-acl-logging/0.log" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.574568 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kwnzt_5748c84b-daec-4bf0-bda9-180d379ab075/ovn-controller/0.log" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.574900 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9" exitCode=0 Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575001 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a" exitCode=0 Jan 28 15:30:49 crc 
kubenswrapper[4656]: I0128 15:30:49.575075 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164" exitCode=0 Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575145 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2" exitCode=0 Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575229 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4" exitCode=0 Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575297 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97" exitCode=0 Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575370 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880" exitCode=143 Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575466 4656 generic.go:334] "Generic (PLEG): container finished" podID="5748c84b-daec-4bf0-bda9-180d379ab075" containerID="f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee" exitCode=143 Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575526 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.575563 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576347 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576413 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576478 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576542 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576596 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" 
event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576671 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576741 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576799 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576845 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576893 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576945 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.576992 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577037 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577177 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577234 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577290 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577345 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577395 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577439 4656 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577487 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577544 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577591 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577638 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577683 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577730 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577805 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577894 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.577979 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578055 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578122 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578220 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578281 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578354 4656 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578407 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578472 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578550 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578608 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578686 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kwnzt" event={"ID":"5748c84b-daec-4bf0-bda9-180d379ab075","Type":"ContainerDied","Data":"99f3f58926f9f5145244dbe6e9acfd081f57a6d5e67d0fa71fb1124101e0bee2"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578763 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578837 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578918 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578992 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.579063 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.579130 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.579206 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.579263 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.579319 4656 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.579363 4656 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.578474 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5748c84b-daec-4bf0-bda9-180d379ab075" (UID: "5748c84b-daec-4bf0-bda9-180d379ab075"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.617906 4656 scope.go:117] "RemoveContainer" containerID="318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.633004 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.648234 4656 scope.go:117] "RemoveContainer" containerID="be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.656820 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-ovn\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.656884 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.656908 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-run-ovn-kubernetes\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.656928 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovnkube-script-lib\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.656950 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-var-lib-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.656967 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-env-overrides\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.657412 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-kubelet\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.657463 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-log-socket\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.657481 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-cni-bin\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.657497 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-run-netns\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.657807 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-slash\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.657923 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x64t\" (UniqueName: \"kubernetes.io/projected/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-kube-api-access-8x64t\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.657988 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-systemd\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658033 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovnkube-config\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658057 4656 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-node-log\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658111 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovn-node-metrics-cert\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658141 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-systemd-units\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658190 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658232 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-etc-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658252 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-cni-netd\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658336 4656 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-log-socket\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658360 4656 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658373 4656 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658383 4656 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658392 4656 
reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658401 4656 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658410 4656 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658419 4656 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748c84b-daec-4bf0-bda9-180d379ab075-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658427 4656 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-node-log\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658435 4656 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658445 4656 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658464 4656 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658475 4656 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658483 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68qp2\" (UniqueName: \"kubernetes.io/projected/5748c84b-daec-4bf0-bda9-180d379ab075-kube-api-access-68qp2\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658492 4656 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658501 4656 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658510 4656 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748c84b-daec-4bf0-bda9-180d379ab075-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.658519 4656 
reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748c84b-daec-4bf0-bda9-180d379ab075-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.663452 4656 scope.go:117] "RemoveContainer" containerID="8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.681216 4656 scope.go:117] "RemoveContainer" containerID="8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.697276 4656 scope.go:117] "RemoveContainer" containerID="628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.709891 4656 scope.go:117] "RemoveContainer" containerID="5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.722916 4656 scope.go:117] "RemoveContainer" containerID="c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.736267 4656 scope.go:117] "RemoveContainer" containerID="f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.749790 4656 scope.go:117] "RemoveContainer" containerID="25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.758917 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-slash\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.758968 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x64t\" (UniqueName: \"kubernetes.io/projected/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-kube-api-access-8x64t\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.758996 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-systemd\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759023 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovnkube-config\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759045 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-node-log\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759076 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovn-node-metrics-cert\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759078 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-slash\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759086 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-systemd\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759126 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-systemd-units\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759097 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-systemd-units\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759133 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-node-log\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.759769 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovnkube-config\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760313 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760364 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-etc-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760389 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-cni-netd\") pod \"ovnkube-node-th9gj\" 
(UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760413 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-ovn\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760427 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-etc-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760445 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-cni-netd\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760472 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-ovn\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760411 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760481 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760438 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-run-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760549 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-run-ovn-kubernetes\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760586 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovnkube-script-lib\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760613 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-var-lib-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760699 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-env-overrides\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760732 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-kubelet\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760781 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-log-socket\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760821 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-cni-bin\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760848 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-run-netns\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.760978 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-run-netns\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.761014 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-run-ovn-kubernetes\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.761258 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-kubelet\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.761343 
4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-var-lib-openvswitch\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.761392 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-host-cni-bin\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.761643 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovnkube-script-lib\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.761907 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-log-socket\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.762628 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-env-overrides\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.762856 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-ovn-node-metrics-cert\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.767140 4656 scope.go:117] "RemoveContainer" containerID="318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.767762 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": container with ID starting with 318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9 not found: ID does not exist" containerID="318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.767799 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} err="failed to get container status \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": rpc error: code = NotFound desc = could not find container \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": container with ID starting with 318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.767828 4656 scope.go:117] "RemoveContainer" 
containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.768348 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": container with ID starting with 98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa not found: ID does not exist" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.768369 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} err="failed to get container status \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": rpc error: code = NotFound desc = could not find container \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": container with ID starting with 98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.768382 4656 scope.go:117] "RemoveContainer" containerID="be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.768639 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": container with ID starting with be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a not found: ID does not exist" containerID="be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.768658 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} err="failed to get container status \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": rpc error: code = NotFound desc = could not find container \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": container with ID starting with be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.768674 4656 scope.go:117] "RemoveContainer" containerID="8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.769101 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": container with ID starting with 8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164 not found: ID does not exist" containerID="8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.769139 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} err="failed to get container status \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": rpc error: code = NotFound desc = could not find container \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": container with ID starting with 
8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.769214 4656 scope.go:117] "RemoveContainer" containerID="8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.769718 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": container with ID starting with 8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2 not found: ID does not exist" containerID="8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.769744 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} err="failed to get container status \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": rpc error: code = NotFound desc = could not find container \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": container with ID starting with 8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.769764 4656 scope.go:117] "RemoveContainer" containerID="628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.770146 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": container with ID starting with 628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4 not found: ID does not exist" containerID="628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.770185 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} err="failed to get container status \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": rpc error: code = NotFound desc = could not find container \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": container with ID starting with 628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.770198 4656 scope.go:117] "RemoveContainer" containerID="5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.770504 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": container with ID starting with 5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97 not found: ID does not exist" containerID="5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.770519 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} err="failed to get container status \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": rpc 
error: code = NotFound desc = could not find container \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": container with ID starting with 5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.770532 4656 scope.go:117] "RemoveContainer" containerID="c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.770752 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": container with ID starting with c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880 not found: ID does not exist" containerID="c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.770770 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} err="failed to get container status \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": rpc error: code = NotFound desc = could not find container \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": container with ID starting with c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.770782 4656 scope.go:117] "RemoveContainer" containerID="f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.771079 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": container with ID starting with f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee not found: ID does not exist" containerID="f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.771096 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} err="failed to get container status \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": rpc error: code = NotFound desc = could not find container \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": container with ID starting with f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.771111 4656 scope.go:117] "RemoveContainer" containerID="25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970" Jan 28 15:30:49 crc kubenswrapper[4656]: E0128 15:30:49.771452 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": container with ID starting with 25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970 not found: ID does not exist" containerID="25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.771473 4656 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} err="failed to get container status \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": rpc error: code = NotFound desc = could not find container \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": container with ID starting with 25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.771485 4656 scope.go:117] "RemoveContainer" containerID="318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.771709 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} err="failed to get container status \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": rpc error: code = NotFound desc = could not find container \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": container with ID starting with 318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.771725 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.772064 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} err="failed to get container status \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": rpc error: code = NotFound desc = could not find container \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": container with ID starting with 98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.772082 4656 scope.go:117] "RemoveContainer" containerID="be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.772338 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} err="failed to get container status \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": rpc error: code = NotFound desc = could not find container \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": container with ID starting with be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.772365 4656 scope.go:117] "RemoveContainer" containerID="8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.772670 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} err="failed to get container status \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": rpc error: code = NotFound desc = could not find container \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": container with ID starting with 8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164 not found: ID does not exist" Jan 
28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.772695 4656 scope.go:117] "RemoveContainer" containerID="8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.772892 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} err="failed to get container status \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": rpc error: code = NotFound desc = could not find container \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": container with ID starting with 8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.772907 4656 scope.go:117] "RemoveContainer" containerID="628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.773108 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} err="failed to get container status \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": rpc error: code = NotFound desc = could not find container \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": container with ID starting with 628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.773127 4656 scope.go:117] "RemoveContainer" containerID="5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.775339 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} err="failed to get container status \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": rpc error: code = NotFound desc = could not find container \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": container with ID starting with 5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.775372 4656 scope.go:117] "RemoveContainer" containerID="c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.775828 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} err="failed to get container status \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": rpc error: code = NotFound desc = could not find container \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": container with ID starting with c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.775887 4656 scope.go:117] "RemoveContainer" containerID="f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.776253 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} err="failed to get container status 
\"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": rpc error: code = NotFound desc = could not find container \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": container with ID starting with f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.776281 4656 scope.go:117] "RemoveContainer" containerID="25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.776577 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} err="failed to get container status \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": rpc error: code = NotFound desc = could not find container \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": container with ID starting with 25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.776603 4656 scope.go:117] "RemoveContainer" containerID="318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.777393 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x64t\" (UniqueName: \"kubernetes.io/projected/288efb1a-43ee-454e-8e5b-9a54bf7ceb3e-kube-api-access-8x64t\") pod \"ovnkube-node-th9gj\" (UID: \"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e\") " pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.777821 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} err="failed to get container status \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": rpc error: code = NotFound desc = could not find container \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": container with ID starting with 318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.777848 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.778186 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} err="failed to get container status \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": rpc error: code = NotFound desc = could not find container \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": container with ID starting with 98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.778224 4656 scope.go:117] "RemoveContainer" containerID="be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.778515 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} err="failed to get container status \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": rpc 
error: code = NotFound desc = could not find container \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": container with ID starting with be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.778542 4656 scope.go:117] "RemoveContainer" containerID="8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.778843 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} err="failed to get container status \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": rpc error: code = NotFound desc = could not find container \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": container with ID starting with 8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.778869 4656 scope.go:117] "RemoveContainer" containerID="8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.779247 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} err="failed to get container status \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": rpc error: code = NotFound desc = could not find container \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": container with ID starting with 8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.779267 4656 scope.go:117] "RemoveContainer" containerID="628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.779543 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} err="failed to get container status \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": rpc error: code = NotFound desc = could not find container \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": container with ID starting with 628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.779568 4656 scope.go:117] "RemoveContainer" containerID="5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.779823 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} err="failed to get container status \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": rpc error: code = NotFound desc = could not find container \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": container with ID starting with 5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.779847 4656 scope.go:117] "RemoveContainer" containerID="c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880" Jan 28 15:30:49 crc 
kubenswrapper[4656]: I0128 15:30:49.780334 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} err="failed to get container status \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": rpc error: code = NotFound desc = could not find container \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": container with ID starting with c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.780359 4656 scope.go:117] "RemoveContainer" containerID="f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.780634 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} err="failed to get container status \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": rpc error: code = NotFound desc = could not find container \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": container with ID starting with f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.780662 4656 scope.go:117] "RemoveContainer" containerID="25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.781275 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} err="failed to get container status \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": rpc error: code = NotFound desc = could not find container \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": container with ID starting with 25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.781303 4656 scope.go:117] "RemoveContainer" containerID="318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.781615 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9"} err="failed to get container status \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": rpc error: code = NotFound desc = could not find container \"318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9\": container with ID starting with 318da837b11797115df5850221642f044c9460fca4f9202aaf2217654ecb16f9 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.781640 4656 scope.go:117] "RemoveContainer" containerID="98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.781850 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa"} err="failed to get container status \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": rpc error: code = NotFound desc = could not find container \"98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa\": container with ID 
starting with 98193fc50623239b1e7db0ca744827bb6fed3ee6d43be5f533c548db1f4436aa not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.781871 4656 scope.go:117] "RemoveContainer" containerID="be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.782206 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a"} err="failed to get container status \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": rpc error: code = NotFound desc = could not find container \"be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a\": container with ID starting with be872c6929cc59c9d5df94a8e8777fe31bbf1ae21257b1f47eef9ef9822eec1a not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.782225 4656 scope.go:117] "RemoveContainer" containerID="8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.782548 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164"} err="failed to get container status \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": rpc error: code = NotFound desc = could not find container \"8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164\": container with ID starting with 8d4a7e68adfe79db9a755a6e183c0f5e285d4f3899675b45ad331a8619040164 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.782567 4656 scope.go:117] "RemoveContainer" containerID="8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.782802 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2"} err="failed to get container status \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": rpc error: code = NotFound desc = could not find container \"8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2\": container with ID starting with 8eb5e1ddd0d7c9f98341ee45b6c9b63b4d7b0486e75107581752a114b44583d2 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.782829 4656 scope.go:117] "RemoveContainer" containerID="628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.783154 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4"} err="failed to get container status \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": rpc error: code = NotFound desc = could not find container \"628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4\": container with ID starting with 628e60561b01f943522126042fca3053290bf9351b04be33573c4861eeb27df4 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.783255 4656 scope.go:117] "RemoveContainer" containerID="5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.783622 4656 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97"} err="failed to get container status \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": rpc error: code = NotFound desc = could not find container \"5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97\": container with ID starting with 5db6e7eccb28d8b11182a9a2068b26df2e2079c99e39abf5d63137cdbd41de97 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.783644 4656 scope.go:117] "RemoveContainer" containerID="c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.783994 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880"} err="failed to get container status \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": rpc error: code = NotFound desc = could not find container \"c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880\": container with ID starting with c7fa0aaecf0dc9f16a2c5c76d48bca3002e1b0e00c5c327f8089fbd3a220c880 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.784014 4656 scope.go:117] "RemoveContainer" containerID="f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.784276 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee"} err="failed to get container status \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": rpc error: code = NotFound desc = could not find container \"f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee\": container with ID starting with f61c43cc15decee8948cefdbf43adb115362d568abdc3acd87f41a401ffce3ee not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.784300 4656 scope.go:117] "RemoveContainer" containerID="25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.784591 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970"} err="failed to get container status \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": rpc error: code = NotFound desc = could not find container \"25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970\": container with ID starting with 25c029cf4061d7fa530c0572697c64341a24a890dee25a3a1ad9d2fb996b9970 not found: ID does not exist" Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.798957 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:49 crc kubenswrapper[4656]: W0128 15:30:49.818015 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod288efb1a_43ee_454e_8e5b_9a54bf7ceb3e.slice/crio-d9c0c98e8cf918cf57030c73bc0ec719174d7450d3540efe33d699ded93c2148 WatchSource:0}: Error finding container d9c0c98e8cf918cf57030c73bc0ec719174d7450d3540efe33d699ded93c2148: Status 404 returned error can't find the container with id d9c0c98e8cf918cf57030c73bc0ec719174d7450d3540efe33d699ded93c2148 Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.925507 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kwnzt"] Jan 28 15:30:49 crc kubenswrapper[4656]: I0128 15:30:49.931953 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kwnzt"] Jan 28 15:30:50 crc kubenswrapper[4656]: I0128 15:30:50.585862 4656 generic.go:334] "Generic (PLEG): container finished" podID="288efb1a-43ee-454e-8e5b-9a54bf7ceb3e" containerID="193609b139dfaf3762a1d2c25204365ccd8cc3f4584d6d79176fc0232ec1d674" exitCode=0 Jan 28 15:30:50 crc kubenswrapper[4656]: I0128 15:30:50.585959 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerDied","Data":"193609b139dfaf3762a1d2c25204365ccd8cc3f4584d6d79176fc0232ec1d674"} Jan 28 15:30:50 crc kubenswrapper[4656]: I0128 15:30:50.586322 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"d9c0c98e8cf918cf57030c73bc0ec719174d7450d3540efe33d699ded93c2148"} Jan 28 15:30:50 crc kubenswrapper[4656]: I0128 15:30:50.591641 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/2.log" Jan 28 15:30:51 crc kubenswrapper[4656]: I0128 15:30:51.177680 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5748c84b-daec-4bf0-bda9-180d379ab075" path="/var/lib/kubelet/pods/5748c84b-daec-4bf0-bda9-180d379ab075/volumes" Jan 28 15:30:51 crc kubenswrapper[4656]: I0128 15:30:51.599825 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"894ecc8b54f498c6442f75ee0ee117f9d2f807bbea2aa9544c7bef46735db625"} Jan 28 15:30:51 crc kubenswrapper[4656]: I0128 15:30:51.599890 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"2d3503edaf413cefe3a9d08b478b707e744fce610d4d42bf9485c85c0e866cdf"} Jan 28 15:30:51 crc kubenswrapper[4656]: I0128 15:30:51.599904 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"d0499e46e6c9d828eae35a9a8b560d686e6a404557eb8a768129bdd950f87624"} Jan 28 15:30:51 crc kubenswrapper[4656]: I0128 15:30:51.599917 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" 
event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"16ea70d1af063790e9a0f480da320df1c28849f2ceb8f375a1d5b1620b47afeb"} Jan 28 15:30:51 crc kubenswrapper[4656]: I0128 15:30:51.599931 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"38bc9e9cfefae341196e2052015d7707b654ab3a782f98e088c8b0f5a4f7faf0"} Jan 28 15:30:51 crc kubenswrapper[4656]: I0128 15:30:51.599944 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"a62e7e80a9e6f4f7e644b10b1bb3292e1266074ce6edd5f9402f8c2bed4d45e5"} Jan 28 15:30:54 crc kubenswrapper[4656]: I0128 15:30:54.623335 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"1415e63dd0aa4812b1040ef39795c9209f611e6b1e4fafbccda1602a4df570ed"} Jan 28 15:30:56 crc kubenswrapper[4656]: I0128 15:30:56.637459 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" event={"ID":"288efb1a-43ee-454e-8e5b-9a54bf7ceb3e","Type":"ContainerStarted","Data":"481d9714377eeec0f9905ed51981b2db12a950371fb68f7fc58629b246538b6e"} Jan 28 15:30:56 crc kubenswrapper[4656]: I0128 15:30:56.638825 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:56 crc kubenswrapper[4656]: I0128 15:30:56.638858 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:56 crc kubenswrapper[4656]: I0128 15:30:56.638907 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:56 crc kubenswrapper[4656]: I0128 15:30:56.669645 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:56 crc kubenswrapper[4656]: I0128 15:30:56.670311 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:30:56 crc kubenswrapper[4656]: I0128 15:30:56.694932 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" podStartSLOduration=7.694796706 podStartE2EDuration="7.694796706s" podCreationTimestamp="2026-01-28 15:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:30:56.673034836 +0000 UTC m=+747.181205640" watchObservedRunningTime="2026-01-28 15:30:56.694796706 +0000 UTC m=+747.202967510" Jan 28 15:31:01 crc kubenswrapper[4656]: I0128 15:31:01.172343 4656 scope.go:117] "RemoveContainer" containerID="34fa797442b557de0e9ffab2d826f22ba8d92221e464edd57e5778604260c2bd" Jan 28 15:31:01 crc kubenswrapper[4656]: E0128 15:31:01.173026 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-rpzjg_openshift-multus(7662a84d-d9cb-4684-b76f-c63ffeff8344)\"" pod="openshift-multus/multus-rpzjg" podUID="7662a84d-d9cb-4684-b76f-c63ffeff8344" Jan 28 15:31:14 crc kubenswrapper[4656]: 
I0128 15:31:14.214225 4656 scope.go:117] "RemoveContainer" containerID="34fa797442b557de0e9ffab2d826f22ba8d92221e464edd57e5778604260c2bd"
Jan 28 15:31:14 crc kubenswrapper[4656]: I0128 15:31:14.760429 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rpzjg_7662a84d-d9cb-4684-b76f-c63ffeff8344/kube-multus/2.log"
Jan 28 15:31:14 crc kubenswrapper[4656]: I0128 15:31:14.760839 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rpzjg" event={"ID":"7662a84d-d9cb-4684-b76f-c63ffeff8344","Type":"ContainerStarted","Data":"1f2d28087e84390abd2da79385ecb0e570d437b9df6c5df422bd2d41b479a1c4"}
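The kube-multus restart above is consistent with the CrashLoopBackOff refusal at 15:31:01 ("back-off 20s"): the retry at 15:31:14 lands once the 20s window, counted from the last container failure, has expired. A minimal sketch of that restart-backoff schedule, assuming upstream kubelet defaults (10s initial delay, doubling per consecutive failure, capped at 5 minutes); restartBackoff is an illustrative name, not the kubelet's actual API.

package main

import (
	"fmt"
	"time"
)

// Illustrative sketch, not kubelet source: the kubelet delays restarts of a
// crashing container with exponential backoff. Under the assumed defaults,
// the "back-off 20s" in the log corresponds to the second consecutive failure.
func restartBackoff(consecutiveFailures int) time.Duration {
	const (
		initialDelay = 10 * time.Second
		maxDelay     = 5 * time.Minute
	)
	d := initialDelay
	for i := 1; i < consecutiveFailures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> wait %s\n", n, restartBackoff(n))
	}
	// failure 1 -> 10s, 2 -> 20s, 3 -> 40s, 4 -> 1m20s, 5 -> 2m40s, 6 -> 5m0s
}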
for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: I0128 15:31:15.028092 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: I0128 15:31:15.028724 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: I0128 15:31:15.028912 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: I0128 15:31:15.058182 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jhjw\" (UniqueName: \"kubernetes.io/projected/dafc02bf-d18b-4177-afa6-ac17360b54e9-kube-api-access-9jhjw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: I0128 15:31:15.219342 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: E0128 15:31:15.240383 4656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace_dafc02bf-d18b-4177-afa6-ac17360b54e9_0(8e1f6149b059e645054740cdf2a00a19d5559bd7bdc9c1a5b135183bf89a2b7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:31:15 crc kubenswrapper[4656]: E0128 15:31:15.240540 4656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace_dafc02bf-d18b-4177-afa6-ac17360b54e9_0(8e1f6149b059e645054740cdf2a00a19d5559bd7bdc9c1a5b135183bf89a2b7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: E0128 15:31:15.240586 4656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace_dafc02bf-d18b-4177-afa6-ac17360b54e9_0(8e1f6149b059e645054740cdf2a00a19d5559bd7bdc9c1a5b135183bf89a2b7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: E0128 15:31:15.240678 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace(dafc02bf-d18b-4177-afa6-ac17360b54e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace(dafc02bf-d18b-4177-afa6-ac17360b54e9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace_dafc02bf-d18b-4177-afa6-ac17360b54e9_0(8e1f6149b059e645054740cdf2a00a19d5559bd7bdc9c1a5b135183bf89a2b7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" Jan 28 15:31:15 crc kubenswrapper[4656]: I0128 15:31:15.775786 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: I0128 15:31:15.777688 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: E0128 15:31:15.797502 4656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace_dafc02bf-d18b-4177-afa6-ac17360b54e9_0(208b588a364c1a1011bebae67a2b4cbde214f44c676496dfc52db1dabb1b8907): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:31:15 crc kubenswrapper[4656]: E0128 15:31:15.797595 4656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace_dafc02bf-d18b-4177-afa6-ac17360b54e9_0(208b588a364c1a1011bebae67a2b4cbde214f44c676496dfc52db1dabb1b8907): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: E0128 15:31:15.797626 4656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace_dafc02bf-d18b-4177-afa6-ac17360b54e9_0(208b588a364c1a1011bebae67a2b4cbde214f44c676496dfc52db1dabb1b8907): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:15 crc kubenswrapper[4656]: E0128 15:31:15.797689 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace(dafc02bf-d18b-4177-afa6-ac17360b54e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace(dafc02bf-d18b-4177-afa6-ac17360b54e9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_openshift-marketplace_dafc02bf-d18b-4177-afa6-ac17360b54e9_0(208b588a364c1a1011bebae67a2b4cbde214f44c676496dfc52db1dabb1b8907): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" Jan 28 15:31:19 crc kubenswrapper[4656]: I0128 15:31:19.824883 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-th9gj" Jan 28 15:31:29 crc kubenswrapper[4656]: I0128 15:31:29.170725 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:29 crc kubenswrapper[4656]: I0128 15:31:29.171750 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:29 crc kubenswrapper[4656]: I0128 15:31:29.380472 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj"] Jan 28 15:31:29 crc kubenswrapper[4656]: W0128 15:31:29.386732 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddafc02bf_d18b_4177_afa6_ac17360b54e9.slice/crio-63a40a7bcf7c02adf018bb3dee2bd413ab23d334473d795da61b68f34426c7f4 WatchSource:0}: Error finding container 63a40a7bcf7c02adf018bb3dee2bd413ab23d334473d795da61b68f34426c7f4: Status 404 returned error can't find the container with id 63a40a7bcf7c02adf018bb3dee2bd413ab23d334473d795da61b68f34426c7f4 Jan 28 15:31:29 crc kubenswrapper[4656]: I0128 15:31:29.852766 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" event={"ID":"dafc02bf-d18b-4177-afa6-ac17360b54e9","Type":"ContainerStarted","Data":"b23e46dcc4394338299dd14630426ac5043a4c1179cca00604ce955a0eb5ef96"} Jan 28 15:31:29 crc kubenswrapper[4656]: I0128 15:31:29.852819 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" event={"ID":"dafc02bf-d18b-4177-afa6-ac17360b54e9","Type":"ContainerStarted","Data":"63a40a7bcf7c02adf018bb3dee2bd413ab23d334473d795da61b68f34426c7f4"} Jan 28 15:31:30 crc kubenswrapper[4656]: I0128 15:31:30.859582 4656 generic.go:334] "Generic (PLEG): container finished" podID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerID="b23e46dcc4394338299dd14630426ac5043a4c1179cca00604ce955a0eb5ef96" exitCode=0 Jan 28 15:31:30 crc kubenswrapper[4656]: I0128 15:31:30.859637 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" event={"ID":"dafc02bf-d18b-4177-afa6-ac17360b54e9","Type":"ContainerDied","Data":"b23e46dcc4394338299dd14630426ac5043a4c1179cca00604ce955a0eb5ef96"} Jan 28 15:31:33 crc kubenswrapper[4656]: I0128 15:31:33.881994 4656 generic.go:334] "Generic (PLEG): container finished" podID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerID="8b096fce66adb8bf76e69883ec65d094814540117aac1a98d3780111e5587f31" exitCode=0 Jan 28 15:31:33 crc kubenswrapper[4656]: I0128 15:31:33.882311 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" event={"ID":"dafc02bf-d18b-4177-afa6-ac17360b54e9","Type":"ContainerDied","Data":"8b096fce66adb8bf76e69883ec65d094814540117aac1a98d3780111e5587f31"} Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.305476 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v8b7s"] Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.307057 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.318732 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v8b7s"] Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.444903 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-catalog-content\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.444989 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-utilities\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.445047 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v27mf\" (UniqueName: \"kubernetes.io/projected/27f0ce7c-11c5-4334-b92c-ddae4644eafd-kube-api-access-v27mf\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.545864 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-utilities\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.545947 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v27mf\" (UniqueName: \"kubernetes.io/projected/27f0ce7c-11c5-4334-b92c-ddae4644eafd-kube-api-access-v27mf\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.545996 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-catalog-content\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.546548 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-utilities\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.546599 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-catalog-content\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.576407 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v27mf\" (UniqueName: \"kubernetes.io/projected/27f0ce7c-11c5-4334-b92c-ddae4644eafd-kube-api-access-v27mf\") pod \"redhat-operators-v8b7s\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.623106 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.890075 4656 generic.go:334] "Generic (PLEG): container finished" podID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerID="8c93d298af03c828c523d0a9a316b29a5a13f68151396a3ee4c3f297236cd348" exitCode=0 Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.890124 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" event={"ID":"dafc02bf-d18b-4177-afa6-ac17360b54e9","Type":"ContainerDied","Data":"8c93d298af03c828c523d0a9a316b29a5a13f68151396a3ee4c3f297236cd348"} Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.905234 4656 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 15:31:34 crc kubenswrapper[4656]: I0128 15:31:34.909433 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v8b7s"] Jan 28 15:31:34 crc kubenswrapper[4656]: W0128 15:31:34.915477 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27f0ce7c_11c5_4334_b92c_ddae4644eafd.slice/crio-eb8e5dcfdab5dac41b6867a6429f4754a4195f65d6c6026801b73a87a7fe47c5 WatchSource:0}: Error finding container eb8e5dcfdab5dac41b6867a6429f4754a4195f65d6c6026801b73a87a7fe47c5: Status 404 returned error can't find the container with id eb8e5dcfdab5dac41b6867a6429f4754a4195f65d6c6026801b73a87a7fe47c5 Jan 28 15:31:35 crc kubenswrapper[4656]: I0128 15:31:35.896151 4656 generic.go:334] "Generic (PLEG): container finished" podID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerID="1c7cf96542fe516bc05bc2a374b423dd9244a24b1fc85dce4401caa1a519df5e" exitCode=0 Jan 28 15:31:35 crc kubenswrapper[4656]: I0128 15:31:35.896392 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8b7s" event={"ID":"27f0ce7c-11c5-4334-b92c-ddae4644eafd","Type":"ContainerDied","Data":"1c7cf96542fe516bc05bc2a374b423dd9244a24b1fc85dce4401caa1a519df5e"} Jan 28 15:31:35 crc kubenswrapper[4656]: I0128 15:31:35.896419 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8b7s" event={"ID":"27f0ce7c-11c5-4334-b92c-ddae4644eafd","Type":"ContainerStarted","Data":"eb8e5dcfdab5dac41b6867a6429f4754a4195f65d6c6026801b73a87a7fe47c5"} Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.108327 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.175218 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-util\") pod \"dafc02bf-d18b-4177-afa6-ac17360b54e9\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.175323 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-bundle\") pod \"dafc02bf-d18b-4177-afa6-ac17360b54e9\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.175357 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jhjw\" (UniqueName: \"kubernetes.io/projected/dafc02bf-d18b-4177-afa6-ac17360b54e9-kube-api-access-9jhjw\") pod \"dafc02bf-d18b-4177-afa6-ac17360b54e9\" (UID: \"dafc02bf-d18b-4177-afa6-ac17360b54e9\") " Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.176185 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-bundle" (OuterVolumeSpecName: "bundle") pod "dafc02bf-d18b-4177-afa6-ac17360b54e9" (UID: "dafc02bf-d18b-4177-afa6-ac17360b54e9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.180627 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dafc02bf-d18b-4177-afa6-ac17360b54e9-kube-api-access-9jhjw" (OuterVolumeSpecName: "kube-api-access-9jhjw") pod "dafc02bf-d18b-4177-afa6-ac17360b54e9" (UID: "dafc02bf-d18b-4177-afa6-ac17360b54e9"). InnerVolumeSpecName "kube-api-access-9jhjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.186777 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-util" (OuterVolumeSpecName: "util") pod "dafc02bf-d18b-4177-afa6-ac17360b54e9" (UID: "dafc02bf-d18b-4177-afa6-ac17360b54e9"). InnerVolumeSpecName "util". 
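Most of the pod lifecycle in this journal can be reconstructed from the PLEG events alone (the "SyncLoop (PLEG)" ContainerStarted/ContainerDied records above). A small filter over a saved copy of the log makes one pod's container history readable top to bottom; this is a sketch assuming the log text arrives on stdin and matches the event payload exactly as kubenswrapper prints it here (plegscan.go is a hypothetical reader's aid, not an existing tool):

// plegscan.go - hypothetical reader's aid: extract PLEG lifecycle events
// from kubelet journal text on stdin and print pod, event type, and a
// shortened container/sandbox ID.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches the event payload exactly as kubenswrapper logs it above.
	re := regexp.MustCompile(`pod="([^"]+)" event=\{"ID":"([^"]+)","Type":"(ContainerStarted|ContainerDied)","Data":"([0-9a-f]+)"\}`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some journal lines are long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%-60s %-16s %.12s\n", m[1], m[3], m[4])
		}
	}
}

For example: journalctl -u kubelet | go run plegscan.go | grep redhat-operators-v8b7s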
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.277071 4656 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.277150 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jhjw\" (UniqueName: \"kubernetes.io/projected/dafc02bf-d18b-4177-afa6-ac17360b54e9-kube-api-access-9jhjw\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.277189 4656 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dafc02bf-d18b-4177-afa6-ac17360b54e9-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.905270 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" event={"ID":"dafc02bf-d18b-4177-afa6-ac17360b54e9","Type":"ContainerDied","Data":"63a40a7bcf7c02adf018bb3dee2bd413ab23d334473d795da61b68f34426c7f4"} Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.905359 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63a40a7bcf7c02adf018bb3dee2bd413ab23d334473d795da61b68f34426c7f4" Jan 28 15:31:36 crc kubenswrapper[4656]: I0128 15:31:36.906769 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj" Jan 28 15:31:37 crc kubenswrapper[4656]: I0128 15:31:37.912092 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8b7s" event={"ID":"27f0ce7c-11c5-4334-b92c-ddae4644eafd","Type":"ContainerStarted","Data":"dd449c48f558ae08f3c1e6725aa8ce1a5913a7b3d5c5a25666c90cd1deaf89cc"} Jan 28 15:31:38 crc kubenswrapper[4656]: I0128 15:31:38.919924 4656 generic.go:334] "Generic (PLEG): container finished" podID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerID="dd449c48f558ae08f3c1e6725aa8ce1a5913a7b3d5c5a25666c90cd1deaf89cc" exitCode=0 Jan 28 15:31:38 crc kubenswrapper[4656]: I0128 15:31:38.919971 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8b7s" event={"ID":"27f0ce7c-11c5-4334-b92c-ddae4644eafd","Type":"ContainerDied","Data":"dd449c48f558ae08f3c1e6725aa8ce1a5913a7b3d5c5a25666c90cd1deaf89cc"} Jan 28 15:31:39 crc kubenswrapper[4656]: I0128 15:31:39.927450 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8b7s" event={"ID":"27f0ce7c-11c5-4334-b92c-ddae4644eafd","Type":"ContainerStarted","Data":"678c7a77544aff44ae0b905161c93e3879eef66dcec39898bf652f456ae0a4d2"} Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.796717 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v8b7s" podStartSLOduration=4.053362416 podStartE2EDuration="6.796697199s" podCreationTimestamp="2026-01-28 15:31:34 +0000 UTC" firstStartedPulling="2026-01-28 15:31:36.909210944 +0000 UTC m=+787.417381748" lastFinishedPulling="2026-01-28 15:31:39.652545727 +0000 UTC m=+790.160716531" observedRunningTime="2026-01-28 15:31:39.947101786 +0000 UTC m=+790.455272600" watchObservedRunningTime="2026-01-28 15:31:40.796697199 +0000 UTC m=+791.304868003" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 
15:31:40.797258 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bvtmh"] Jan 28 15:31:40 crc kubenswrapper[4656]: E0128 15:31:40.797474 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerName="util" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.797497 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerName="util" Jan 28 15:31:40 crc kubenswrapper[4656]: E0128 15:31:40.797517 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerName="pull" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.797523 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerName="pull" Jan 28 15:31:40 crc kubenswrapper[4656]: E0128 15:31:40.797533 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerName="extract" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.797541 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerName="extract" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.797655 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafc02bf-d18b-4177-afa6-ac17360b54e9" containerName="extract" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.798190 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bvtmh" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.801087 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.801542 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.802060 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-9r959" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.838600 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zvdw\" (UniqueName: \"kubernetes.io/projected/66b69ecf-d4cf-452c-a311-93cecb247ab1-kube-api-access-4zvdw\") pod \"nmstate-operator-646758c888-bvtmh\" (UID: \"66b69ecf-d4cf-452c-a311-93cecb247ab1\") " pod="openshift-nmstate/nmstate-operator-646758c888-bvtmh" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.864237 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bvtmh"] Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.940680 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zvdw\" (UniqueName: \"kubernetes.io/projected/66b69ecf-d4cf-452c-a311-93cecb247ab1-kube-api-access-4zvdw\") pod \"nmstate-operator-646758c888-bvtmh\" (UID: \"66b69ecf-d4cf-452c-a311-93cecb247ab1\") " pod="openshift-nmstate/nmstate-operator-646758c888-bvtmh" Jan 28 15:31:40 crc kubenswrapper[4656]: I0128 15:31:40.964053 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zvdw\" (UniqueName: \"kubernetes.io/projected/66b69ecf-d4cf-452c-a311-93cecb247ab1-kube-api-access-4zvdw\") pod \"nmstate-operator-646758c888-bvtmh\" (UID: 
\"66b69ecf-d4cf-452c-a311-93cecb247ab1\") " pod="openshift-nmstate/nmstate-operator-646758c888-bvtmh" Jan 28 15:31:41 crc kubenswrapper[4656]: I0128 15:31:41.115537 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bvtmh" Jan 28 15:31:41 crc kubenswrapper[4656]: I0128 15:31:41.352531 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bvtmh"] Jan 28 15:31:41 crc kubenswrapper[4656]: I0128 15:31:41.965514 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bvtmh" event={"ID":"66b69ecf-d4cf-452c-a311-93cecb247ab1","Type":"ContainerStarted","Data":"a4610394493c7322c88c17b7675c7aca529bb9ec8951590d9f33069e4c0b03f4"} Jan 28 15:31:44 crc kubenswrapper[4656]: I0128 15:31:44.624287 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:44 crc kubenswrapper[4656]: I0128 15:31:44.625648 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:45 crc kubenswrapper[4656]: I0128 15:31:45.664323 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v8b7s" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="registry-server" probeResult="failure" output=< Jan 28 15:31:45 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 15:31:45 crc kubenswrapper[4656]: > Jan 28 15:31:45 crc kubenswrapper[4656]: I0128 15:31:45.995415 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bvtmh" event={"ID":"66b69ecf-d4cf-452c-a311-93cecb247ab1","Type":"ContainerStarted","Data":"e4b05892bf48c58652d437ed4aba96619d96b932ccc7a58dfa14869704a04182"} Jan 28 15:31:46 crc kubenswrapper[4656]: I0128 15:31:46.023114 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-bvtmh" podStartSLOduration=2.410347537 podStartE2EDuration="6.023086424s" podCreationTimestamp="2026-01-28 15:31:40 +0000 UTC" firstStartedPulling="2026-01-28 15:31:41.359508898 +0000 UTC m=+791.867679702" lastFinishedPulling="2026-01-28 15:31:44.972247785 +0000 UTC m=+795.480418589" observedRunningTime="2026-01-28 15:31:46.016155985 +0000 UTC m=+796.524344169" watchObservedRunningTime="2026-01-28 15:31:46.023086424 +0000 UTC m=+796.531257228" Jan 28 15:31:46 crc kubenswrapper[4656]: I0128 15:31:46.930835 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-xjdms"] Jan 28 15:31:46 crc kubenswrapper[4656]: I0128 15:31:46.932107 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" Jan 28 15:31:46 crc kubenswrapper[4656]: I0128 15:31:46.936421 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x"] Jan 28 15:31:46 crc kubenswrapper[4656]: I0128 15:31:46.937207 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:46 crc kubenswrapper[4656]: I0128 15:31:46.941635 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 28 15:31:46 crc kubenswrapper[4656]: I0128 15:31:46.942585 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mgc4r" Jan 28 15:31:46 crc kubenswrapper[4656]: I0128 15:31:46.999130 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-rjhhj"] Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.000041 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.004272 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-xjdms"] Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.012495 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x"] Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.090060 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm7lb\" (UniqueName: \"kubernetes.io/projected/0b2d8d4a-d2ba-4c29-b545-f23070527595-kube-api-access-wm7lb\") pod \"nmstate-webhook-8474b5b9d8-rrl4x\" (UID: \"0b2d8d4a-d2ba-4c29-b545-f23070527595\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.090126 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvnfb\" (UniqueName: \"kubernetes.io/projected/59adf6ef-2655-44f4-ae3d-91c315439598-kube-api-access-pvnfb\") pod \"nmstate-metrics-54757c584b-xjdms\" (UID: \"59adf6ef-2655-44f4-ae3d-91c315439598\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.090225 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0b2d8d4a-d2ba-4c29-b545-f23070527595-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rrl4x\" (UID: \"0b2d8d4a-d2ba-4c29-b545-f23070527595\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.191551 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-dbus-socket\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.191628 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm7lb\" (UniqueName: \"kubernetes.io/projected/0b2d8d4a-d2ba-4c29-b545-f23070527595-kube-api-access-wm7lb\") pod \"nmstate-webhook-8474b5b9d8-rrl4x\" (UID: \"0b2d8d4a-d2ba-4c29-b545-f23070527595\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.191799 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvnfb\" (UniqueName: \"kubernetes.io/projected/59adf6ef-2655-44f4-ae3d-91c315439598-kube-api-access-pvnfb\") pod 
\"nmstate-metrics-54757c584b-xjdms\" (UID: \"59adf6ef-2655-44f4-ae3d-91c315439598\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.191868 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-nmstate-lock\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.191960 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0b2d8d4a-d2ba-4c29-b545-f23070527595-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rrl4x\" (UID: \"0b2d8d4a-d2ba-4c29-b545-f23070527595\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.192112 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c957l\" (UniqueName: \"kubernetes.io/projected/41fa0969-44fb-4cf4-916c-da0dd393a58c-kube-api-access-c957l\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: E0128 15:31:47.192110 4656 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.192177 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-ovs-socket\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: E0128 15:31:47.192277 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b2d8d4a-d2ba-4c29-b545-f23070527595-tls-key-pair podName:0b2d8d4a-d2ba-4c29-b545-f23070527595 nodeName:}" failed. No retries permitted until 2026-01-28 15:31:47.692228364 +0000 UTC m=+798.200399168 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/0b2d8d4a-d2ba-4c29-b545-f23070527595-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-rrl4x" (UID: "0b2d8d4a-d2ba-4c29-b545-f23070527595") : secret "openshift-nmstate-webhook" not found Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.223241 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm7lb\" (UniqueName: \"kubernetes.io/projected/0b2d8d4a-d2ba-4c29-b545-f23070527595-kube-api-access-wm7lb\") pod \"nmstate-webhook-8474b5b9d8-rrl4x\" (UID: \"0b2d8d4a-d2ba-4c29-b545-f23070527595\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.234689 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvnfb\" (UniqueName: \"kubernetes.io/projected/59adf6ef-2655-44f4-ae3d-91c315439598-kube-api-access-pvnfb\") pod \"nmstate-metrics-54757c584b-xjdms\" (UID: \"59adf6ef-2655-44f4-ae3d-91c315439598\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.253776 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.278136 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr"] Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.279004 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.285648 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.285953 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hng9h" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.294020 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c957l\" (UniqueName: \"kubernetes.io/projected/41fa0969-44fb-4cf4-916c-da0dd393a58c-kube-api-access-c957l\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.294071 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-ovs-socket\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.294113 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-dbus-socket\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.294147 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-nmstate-lock\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.294262 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-nmstate-lock\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.294551 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-ovs-socket\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.295454 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/41fa0969-44fb-4cf4-916c-da0dd393a58c-dbus-socket\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.296736 4656 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.321870 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr"] Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.326794 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c957l\" (UniqueName: \"kubernetes.io/projected/41fa0969-44fb-4cf4-916c-da0dd393a58c-kube-api-access-c957l\") pod \"nmstate-handler-rjhhj\" (UID: \"41fa0969-44fb-4cf4-916c-da0dd393a58c\") " pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.397918 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrtqc\" (UniqueName: \"kubernetes.io/projected/c74f938e-5184-4ea3-afe5-373ef61c779a-kube-api-access-vrtqc\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.397976 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c74f938e-5184-4ea3-afe5-373ef61c779a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.398060 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c74f938e-5184-4ea3-afe5-373ef61c779a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.474454 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5567b6b6fc-hb2cx"] Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.475842 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.489665 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5567b6b6fc-hb2cx"] Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.504930 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrtqc\" (UniqueName: \"kubernetes.io/projected/c74f938e-5184-4ea3-afe5-373ef61c779a-kube-api-access-vrtqc\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.505094 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c74f938e-5184-4ea3-afe5-373ef61c779a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.505291 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c74f938e-5184-4ea3-afe5-373ef61c779a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: E0128 15:31:47.513044 4656 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 28 15:31:47 crc kubenswrapper[4656]: E0128 15:31:47.513178 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c74f938e-5184-4ea3-afe5-373ef61c779a-plugin-serving-cert podName:c74f938e-5184-4ea3-afe5-373ef61c779a nodeName:}" failed. No retries permitted until 2026-01-28 15:31:48.013123369 +0000 UTC m=+798.521294183 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/c74f938e-5184-4ea3-afe5-373ef61c779a-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-2bhpr" (UID: "c74f938e-5184-4ea3-afe5-373ef61c779a") : secret "plugin-serving-cert" not found Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.513892 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c74f938e-5184-4ea3-afe5-373ef61c779a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.537677 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrtqc\" (UniqueName: \"kubernetes.io/projected/c74f938e-5184-4ea3-afe5-373ef61c779a-kube-api-access-vrtqc\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.615221 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-service-ca\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.615293 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-oauth-serving-cert\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.615324 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fkz8\" (UniqueName: \"kubernetes.io/projected/b7c707ef-4b20-402f-9a10-d982dd0f59dd-kube-api-access-8fkz8\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.615341 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-trusted-ca-bundle\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.615358 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-serving-cert\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.615380 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-config\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " 
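Both secret mounts above fail the same way: the Secret object ("openshift-nmstate-webhook", then "plugin-serving-cert") does not exist yet, so nestedpendingoperations parks the operation and permits no retry until durationBeforeRetry elapses, logged here as 500ms. A minimal sketch of that retry gate, assuming the common exponential policy (initial 500ms, doubling, capped near two minutes in upstream kubelet); this is illustrative, not kubelet source:

// backoff.go - minimal sketch (not kubelet source) of the retry gate seen
// above: after a MountVolume.SetUp failure, no retry is permitted until
// durationBeforeRetry elapses, and the delay widens on repeated failures.
package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond // "durationBeforeRetry 500ms" in the records above
	maxDelay     = 2 * time.Minute        // assumed cap; upstream kubelet caps near two minutes
)

// retryGate tracks one pending volume operation.
type retryGate struct {
	delay    time.Duration // current durationBeforeRetry
	lastFail time.Time     // when the operation last failed
}

// fail records a failure and widens the retry window.
func (g *retryGate) fail(now time.Time) {
	if g.delay == 0 {
		g.delay = initialDelay
	} else {
		g.delay *= 2
		if g.delay > maxDelay {
			g.delay = maxDelay
		}
	}
	g.lastFail = now
}

// mayRetry reports whether durationBeforeRetry has elapsed.
func (g *retryGate) mayRetry(now time.Time) bool {
	return !now.Before(g.lastFail.Add(g.delay))
}

func main() {
	var g retryGate
	now := time.Now()
	for i := 1; i <= 5; i++ {
		g.fail(now)
		fmt.Printf("failure %d: durationBeforeRetry %v, retry allowed immediately: %v\n",
			i, g.delay, g.mayRetry(now))
		now = now.Add(g.delay) // jump to the retry deadline for the next attempt
	}
}

In this log the gate never widens past 500ms: both secrets are created moments later and the retried SetUp calls succeed at 15:31:47.723659 and 15:31:48.037805 below.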
pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.615404 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-oauth-config\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.623316 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.625609 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-xjdms"] Jan 28 15:31:47 crc kubenswrapper[4656]: W0128 15:31:47.659207 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41fa0969_44fb_4cf4_916c_da0dd393a58c.slice/crio-eed67e51a9408ec961702fc368dbf4a559930f33c336cc7a646315d69ef4e78b WatchSource:0}: Error finding container eed67e51a9408ec961702fc368dbf4a559930f33c336cc7a646315d69ef4e78b: Status 404 returned error can't find the container with id eed67e51a9408ec961702fc368dbf4a559930f33c336cc7a646315d69ef4e78b Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.716861 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0b2d8d4a-d2ba-4c29-b545-f23070527595-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rrl4x\" (UID: \"0b2d8d4a-d2ba-4c29-b545-f23070527595\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.716926 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-service-ca\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.716994 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-oauth-serving-cert\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.717021 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fkz8\" (UniqueName: \"kubernetes.io/projected/b7c707ef-4b20-402f-9a10-d982dd0f59dd-kube-api-access-8fkz8\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.717045 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-trusted-ca-bundle\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.717075 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-serving-cert\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.717116 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-config\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.717155 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-oauth-config\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.718050 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-oauth-serving-cert\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.718116 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-service-ca\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.718881 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-config\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.719347 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7c707ef-4b20-402f-9a10-d982dd0f59dd-trusted-ca-bundle\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.723659 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0b2d8d4a-d2ba-4c29-b545-f23070527595-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rrl4x\" (UID: \"0b2d8d4a-d2ba-4c29-b545-f23070527595\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.723697 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-oauth-config\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.723748 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b7c707ef-4b20-402f-9a10-d982dd0f59dd-console-serving-cert\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.737047 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fkz8\" (UniqueName: \"kubernetes.io/projected/b7c707ef-4b20-402f-9a10-d982dd0f59dd-kube-api-access-8fkz8\") pod \"console-5567b6b6fc-hb2cx\" (UID: \"b7c707ef-4b20-402f-9a10-d982dd0f59dd\") " pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.814666 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:47 crc kubenswrapper[4656]: I0128 15:31:47.866321 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:48 crc kubenswrapper[4656]: I0128 15:31:48.015465 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-rjhhj" event={"ID":"41fa0969-44fb-4cf4-916c-da0dd393a58c","Type":"ContainerStarted","Data":"eed67e51a9408ec961702fc368dbf4a559930f33c336cc7a646315d69ef4e78b"} Jan 28 15:31:48 crc kubenswrapper[4656]: I0128 15:31:48.016846 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" event={"ID":"59adf6ef-2655-44f4-ae3d-91c315439598","Type":"ContainerStarted","Data":"f9b271005704877eca705e22367b2c57c420ed1fb5659025a900e0896db90b06"} Jan 28 15:31:48 crc kubenswrapper[4656]: I0128 15:31:48.025928 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c74f938e-5184-4ea3-afe5-373ef61c779a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:48 crc kubenswrapper[4656]: I0128 15:31:48.037805 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c74f938e-5184-4ea3-afe5-373ef61c779a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2bhpr\" (UID: \"c74f938e-5184-4ea3-afe5-373ef61c779a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:48 crc kubenswrapper[4656]: W0128 15:31:48.063099 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7c707ef_4b20_402f_9a10_d982dd0f59dd.slice/crio-a35ee7354f42f7415c87fe0e1d6ea521b2d1ffa71875c1c1f0dd87b545a9e96d WatchSource:0}: Error finding container a35ee7354f42f7415c87fe0e1d6ea521b2d1ffa71875c1c1f0dd87b545a9e96d: Status 404 returned error can't find the container with id a35ee7354f42f7415c87fe0e1d6ea521b2d1ffa71875c1c1f0dd87b545a9e96d Jan 28 15:31:48 crc kubenswrapper[4656]: I0128 15:31:48.064408 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5567b6b6fc-hb2cx"] Jan 28 15:31:48 crc kubenswrapper[4656]: I0128 15:31:48.118370 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x"] Jan 28 15:31:48 crc kubenswrapper[4656]: I0128 15:31:48.252674 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" Jan 28 15:31:48 crc kubenswrapper[4656]: I0128 15:31:48.479582 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr"] Jan 28 15:31:48 crc kubenswrapper[4656]: W0128 15:31:48.486406 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc74f938e_5184_4ea3_afe5_373ef61c779a.slice/crio-1738a1ceb6cd6e4216e2eb6851a2389489a7b6a9317e2b6f5dab57cf12d5a756 WatchSource:0}: Error finding container 1738a1ceb6cd6e4216e2eb6851a2389489a7b6a9317e2b6f5dab57cf12d5a756: Status 404 returned error can't find the container with id 1738a1ceb6cd6e4216e2eb6851a2389489a7b6a9317e2b6f5dab57cf12d5a756 Jan 28 15:31:49 crc kubenswrapper[4656]: I0128 15:31:49.087337 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" event={"ID":"0b2d8d4a-d2ba-4c29-b545-f23070527595","Type":"ContainerStarted","Data":"41632c767cea4fd4b10bc5a7c8b852338afedba5f28015b6985fdbe94d1072e4"} Jan 28 15:31:49 crc kubenswrapper[4656]: I0128 15:31:49.093624 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5567b6b6fc-hb2cx" event={"ID":"b7c707ef-4b20-402f-9a10-d982dd0f59dd","Type":"ContainerStarted","Data":"27cd4e3de32b971b6b9feeb29c3013eebdef15c0b0081aeb3f894522ea0c9bf7"} Jan 28 15:31:49 crc kubenswrapper[4656]: I0128 15:31:49.093707 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5567b6b6fc-hb2cx" event={"ID":"b7c707ef-4b20-402f-9a10-d982dd0f59dd","Type":"ContainerStarted","Data":"a35ee7354f42f7415c87fe0e1d6ea521b2d1ffa71875c1c1f0dd87b545a9e96d"} Jan 28 15:31:49 crc kubenswrapper[4656]: I0128 15:31:49.095667 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" event={"ID":"c74f938e-5184-4ea3-afe5-373ef61c779a","Type":"ContainerStarted","Data":"1738a1ceb6cd6e4216e2eb6851a2389489a7b6a9317e2b6f5dab57cf12d5a756"} Jan 28 15:31:49 crc kubenswrapper[4656]: I0128 15:31:49.131869 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5567b6b6fc-hb2cx" podStartSLOduration=2.131844092 podStartE2EDuration="2.131844092s" podCreationTimestamp="2026-01-28 15:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:31:49.121422183 +0000 UTC m=+799.629592997" watchObservedRunningTime="2026-01-28 15:31:49.131844092 +0000 UTC m=+799.640014896" Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.150289 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" event={"ID":"0b2d8d4a-d2ba-4c29-b545-f23070527595","Type":"ContainerStarted","Data":"974bf24ad54875ac334c30e845e0d22e05bda2e69b7b25fb3df9867da0612917"} Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.150991 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.154800 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" event={"ID":"59adf6ef-2655-44f4-ae3d-91c315439598","Type":"ContainerStarted","Data":"e84e9ab9b83c016813a6f889eeca8cc888bacc714560990232a8892e4afeebf8"} Jan 28 15:31:54 crc 
kubenswrapper[4656]: I0128 15:31:54.156400 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" event={"ID":"c74f938e-5184-4ea3-afe5-373ef61c779a","Type":"ContainerStarted","Data":"3df4a821d56cbb30c99bbd635c36b1b308aa7a7a6914c34862dc31778c14d3e3"} Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.158540 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-rjhhj" event={"ID":"41fa0969-44fb-4cf4-916c-da0dd393a58c","Type":"ContainerStarted","Data":"e8961c37a6a911378d40ea3b26cad907348f7f0b2bb60c3acf40ab5d5bb7b1dc"} Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.158749 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.174901 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" podStartSLOduration=3.075339869 podStartE2EDuration="8.174869157s" podCreationTimestamp="2026-01-28 15:31:46 +0000 UTC" firstStartedPulling="2026-01-28 15:31:48.168748516 +0000 UTC m=+798.676919330" lastFinishedPulling="2026-01-28 15:31:53.268277804 +0000 UTC m=+803.776448618" observedRunningTime="2026-01-28 15:31:54.171789408 +0000 UTC m=+804.679960212" watchObservedRunningTime="2026-01-28 15:31:54.174869157 +0000 UTC m=+804.683039961" Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.199284 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-rjhhj" podStartSLOduration=2.56192045 podStartE2EDuration="8.199237167s" podCreationTimestamp="2026-01-28 15:31:46 +0000 UTC" firstStartedPulling="2026-01-28 15:31:47.666125178 +0000 UTC m=+798.174295982" lastFinishedPulling="2026-01-28 15:31:53.303441895 +0000 UTC m=+803.811612699" observedRunningTime="2026-01-28 15:31:54.194743278 +0000 UTC m=+804.702914082" watchObservedRunningTime="2026-01-28 15:31:54.199237167 +0000 UTC m=+804.707407971" Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.227377 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2bhpr" podStartSLOduration=2.447094355 podStartE2EDuration="7.227065127s" podCreationTimestamp="2026-01-28 15:31:47 +0000 UTC" firstStartedPulling="2026-01-28 15:31:48.488026704 +0000 UTC m=+798.996197508" lastFinishedPulling="2026-01-28 15:31:53.267997476 +0000 UTC m=+803.776168280" observedRunningTime="2026-01-28 15:31:54.221668902 +0000 UTC m=+804.729839716" watchObservedRunningTime="2026-01-28 15:31:54.227065127 +0000 UTC m=+804.735235961" Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.664518 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.708895 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:54 crc kubenswrapper[4656]: I0128 15:31:54.901778 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v8b7s"] Jan 28 15:31:56 crc kubenswrapper[4656]: I0128 15:31:56.176553 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v8b7s" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="registry-server" 
containerID="cri-o://678c7a77544aff44ae0b905161c93e3879eef66dcec39898bf652f456ae0a4d2" gracePeriod=2 Jan 28 15:31:57 crc kubenswrapper[4656]: I0128 15:31:57.190258 4656 generic.go:334] "Generic (PLEG): container finished" podID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerID="678c7a77544aff44ae0b905161c93e3879eef66dcec39898bf652f456ae0a4d2" exitCode=0 Jan 28 15:31:57 crc kubenswrapper[4656]: I0128 15:31:57.190311 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8b7s" event={"ID":"27f0ce7c-11c5-4334-b92c-ddae4644eafd","Type":"ContainerDied","Data":"678c7a77544aff44ae0b905161c93e3879eef66dcec39898bf652f456ae0a4d2"} Jan 28 15:31:57 crc kubenswrapper[4656]: I0128 15:31:57.815226 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:57 crc kubenswrapper[4656]: I0128 15:31:57.815293 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:57 crc kubenswrapper[4656]: I0128 15:31:57.821026 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:57 crc kubenswrapper[4656]: I0128 15:31:57.897546 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.072845 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v27mf\" (UniqueName: \"kubernetes.io/projected/27f0ce7c-11c5-4334-b92c-ddae4644eafd-kube-api-access-v27mf\") pod \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.072953 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-utilities\") pod \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.073120 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-catalog-content\") pod \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\" (UID: \"27f0ce7c-11c5-4334-b92c-ddae4644eafd\") " Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.074550 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-utilities" (OuterVolumeSpecName: "utilities") pod "27f0ce7c-11c5-4334-b92c-ddae4644eafd" (UID: "27f0ce7c-11c5-4334-b92c-ddae4644eafd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.080069 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f0ce7c-11c5-4334-b92c-ddae4644eafd-kube-api-access-v27mf" (OuterVolumeSpecName: "kube-api-access-v27mf") pod "27f0ce7c-11c5-4334-b92c-ddae4644eafd" (UID: "27f0ce7c-11c5-4334-b92c-ddae4644eafd"). InnerVolumeSpecName "kube-api-access-v27mf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.174366 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.174398 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v27mf\" (UniqueName: \"kubernetes.io/projected/27f0ce7c-11c5-4334-b92c-ddae4644eafd-kube-api-access-v27mf\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.198430 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8b7s" event={"ID":"27f0ce7c-11c5-4334-b92c-ddae4644eafd","Type":"ContainerDied","Data":"eb8e5dcfdab5dac41b6867a6429f4754a4195f65d6c6026801b73a87a7fe47c5"} Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.198495 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v8b7s" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.198520 4656 scope.go:117] "RemoveContainer" containerID="678c7a77544aff44ae0b905161c93e3879eef66dcec39898bf652f456ae0a4d2" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.200549 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" event={"ID":"59adf6ef-2655-44f4-ae3d-91c315439598","Type":"ContainerStarted","Data":"ed75f2451fe9a22f75612d489fe154ff9974b2e69f4ebbdff65167e538418c65"} Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.204344 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5567b6b6fc-hb2cx" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.221147 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27f0ce7c-11c5-4334-b92c-ddae4644eafd" (UID: "27f0ce7c-11c5-4334-b92c-ddae4644eafd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.221469 4656 scope.go:117] "RemoveContainer" containerID="dd449c48f558ae08f3c1e6725aa8ce1a5913a7b3d5c5a25666c90cd1deaf89cc" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.265448 4656 scope.go:117] "RemoveContainer" containerID="1c7cf96542fe516bc05bc2a374b423dd9244a24b1fc85dce4401caa1a519df5e" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.278435 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f0ce7c-11c5-4334-b92c-ddae4644eafd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.291655 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jrkdc"] Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.532262 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v8b7s"] Jan 28 15:31:58 crc kubenswrapper[4656]: I0128 15:31:58.536368 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v8b7s"] Jan 28 15:31:59 crc kubenswrapper[4656]: I0128 15:31:59.177633 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" path="/var/lib/kubelet/pods/27f0ce7c-11c5-4334-b92c-ddae4644eafd/volumes" Jan 28 15:31:59 crc kubenswrapper[4656]: I0128 15:31:59.222996 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-xjdms" podStartSLOduration=2.959040195 podStartE2EDuration="13.222958315s" podCreationTimestamp="2026-01-28 15:31:46 +0000 UTC" firstStartedPulling="2026-01-28 15:31:47.647385749 +0000 UTC m=+798.155556563" lastFinishedPulling="2026-01-28 15:31:57.911303879 +0000 UTC m=+808.419474683" observedRunningTime="2026-01-28 15:31:59.222324427 +0000 UTC m=+809.730495291" watchObservedRunningTime="2026-01-28 15:31:59.222958315 +0000 UTC m=+809.731129159" Jan 28 15:32:02 crc kubenswrapper[4656]: I0128 15:32:02.644262 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-rjhhj" Jan 28 15:32:07 crc kubenswrapper[4656]: I0128 15:32:07.874067 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rrl4x" Jan 28 15:32:11 crc kubenswrapper[4656]: I0128 15:32:11.263754 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:32:11 crc kubenswrapper[4656]: I0128 15:32:11.264071 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:32:20 crc kubenswrapper[4656]: I0128 15:32:20.904577 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d"] Jan 28 15:32:20 crc kubenswrapper[4656]: E0128 15:32:20.905408 4656 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="extract-content" Jan 28 15:32:20 crc kubenswrapper[4656]: I0128 15:32:20.905428 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="extract-content" Jan 28 15:32:20 crc kubenswrapper[4656]: E0128 15:32:20.905452 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="extract-utilities" Jan 28 15:32:20 crc kubenswrapper[4656]: I0128 15:32:20.905460 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="extract-utilities" Jan 28 15:32:20 crc kubenswrapper[4656]: E0128 15:32:20.905478 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="registry-server" Jan 28 15:32:20 crc kubenswrapper[4656]: I0128 15:32:20.905486 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="registry-server" Jan 28 15:32:20 crc kubenswrapper[4656]: I0128 15:32:20.905657 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f0ce7c-11c5-4334-b92c-ddae4644eafd" containerName="registry-server" Jan 28 15:32:20 crc kubenswrapper[4656]: I0128 15:32:20.906898 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:20 crc kubenswrapper[4656]: I0128 15:32:20.911426 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 15:32:20 crc kubenswrapper[4656]: I0128 15:32:20.922715 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d"] Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.068851 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.069238 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28z9v\" (UniqueName: \"kubernetes.io/projected/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-kube-api-access-28z9v\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.069398 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.170297 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28z9v\" (UniqueName: 
\"kubernetes.io/projected/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-kube-api-access-28z9v\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.170351 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.170431 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.170975 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.171087 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.190523 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28z9v\" (UniqueName: \"kubernetes.io/projected/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-kube-api-access-28z9v\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.223864 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:21 crc kubenswrapper[4656]: I0128 15:32:21.814201 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d"] Jan 28 15:32:22 crc kubenswrapper[4656]: I0128 15:32:22.348528 4656 generic.go:334] "Generic (PLEG): container finished" podID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerID="8fc9f26efd0e5690cf8d68d433b9bd3eed36dbfd3955ebb0b4c98ddbda8996bb" exitCode=0 Jan 28 15:32:22 crc kubenswrapper[4656]: I0128 15:32:22.348681 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" event={"ID":"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f","Type":"ContainerDied","Data":"8fc9f26efd0e5690cf8d68d433b9bd3eed36dbfd3955ebb0b4c98ddbda8996bb"} Jan 28 15:32:22 crc kubenswrapper[4656]: I0128 15:32:22.348867 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" event={"ID":"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f","Type":"ContainerStarted","Data":"04a156875da8dc6d4d7efec6722544b9e8f5bd11a721782c1b47bdebab3a0813"} Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.339410 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-jrkdc" podUID="acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" containerName="console" containerID="cri-o://6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8" gracePeriod=15 Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.701506 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jrkdc_acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f/console/0.log" Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.701858 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.904490 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-serving-cert\") pod \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.904600 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-service-ca\") pod \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.904639 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-config\") pod \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.904683 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrjsb\" (UniqueName: \"kubernetes.io/projected/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-kube-api-access-jrjsb\") pod \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.904723 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-oauth-serving-cert\") pod \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.904750 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-oauth-config\") pod \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.904806 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-trusted-ca-bundle\") pod \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\" (UID: \"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f\") " Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.905763 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" (UID: "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.905777 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" (UID: "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.906034 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-service-ca" (OuterVolumeSpecName: "service-ca") pod "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" (UID: "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.906064 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-config" (OuterVolumeSpecName: "console-config") pod "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" (UID: "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.911278 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" (UID: "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.911385 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-kube-api-access-jrjsb" (OuterVolumeSpecName: "kube-api-access-jrjsb") pod "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" (UID: "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f"). InnerVolumeSpecName "kube-api-access-jrjsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:32:23 crc kubenswrapper[4656]: I0128 15:32:23.916254 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" (UID: "acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.006273 4656 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.006329 4656 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.006404 4656 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.006419 4656 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.006430 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrjsb\" (UniqueName: \"kubernetes.io/projected/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-kube-api-access-jrjsb\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.006445 4656 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.006455 4656 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.364948 4656 generic.go:334] "Generic (PLEG): container finished" podID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerID="48f4a658635ea51f802fd36a517cb4763602c51a7d8cf54a8212b9ed252b1e88" exitCode=0 Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.365026 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" event={"ID":"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f","Type":"ContainerDied","Data":"48f4a658635ea51f802fd36a517cb4763602c51a7d8cf54a8212b9ed252b1e88"} Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.370040 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jrkdc_acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f/console/0.log" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.370293 4656 generic.go:334] "Generic (PLEG): container finished" podID="acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" containerID="6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8" exitCode=2 Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.370484 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jrkdc" event={"ID":"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f","Type":"ContainerDied","Data":"6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8"} Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.370908 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-jrkdc" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.372110 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jrkdc" event={"ID":"acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f","Type":"ContainerDied","Data":"c41207c51ae6b24c926215e0f0025b63660d715fc02590a179969ca36e581017"} Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.372197 4656 scope.go:117] "RemoveContainer" containerID="6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.411361 4656 scope.go:117] "RemoveContainer" containerID="6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8" Jan 28 15:32:24 crc kubenswrapper[4656]: E0128 15:32:24.413201 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8\": container with ID starting with 6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8 not found: ID does not exist" containerID="6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.413372 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8"} err="failed to get container status \"6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8\": rpc error: code = NotFound desc = could not find container \"6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8\": container with ID starting with 6b93288ef7baae99bfa1f0ed066f4be7df55da5e12d662c8ade21c0e5c6b35a8 not found: ID does not exist" Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.446615 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jrkdc"] Jan 28 15:32:24 crc kubenswrapper[4656]: I0128 15:32:24.454231 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-jrkdc"] Jan 28 15:32:25 crc kubenswrapper[4656]: I0128 15:32:25.178398 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" path="/var/lib/kubelet/pods/acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f/volumes" Jan 28 15:32:25 crc kubenswrapper[4656]: I0128 15:32:25.378878 4656 generic.go:334] "Generic (PLEG): container finished" podID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerID="ea7efe11676daa7d911cdd649195885da2ae75e7f58d2e0c4b126f7cf35c0839" exitCode=0 Jan 28 15:32:25 crc kubenswrapper[4656]: I0128 15:32:25.378936 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" event={"ID":"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f","Type":"ContainerDied","Data":"ea7efe11676daa7d911cdd649195885da2ae75e7f58d2e0c4b126f7cf35c0839"} Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.572460 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.740484 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-util\") pod \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.740612 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28z9v\" (UniqueName: \"kubernetes.io/projected/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-kube-api-access-28z9v\") pod \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.740663 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-bundle\") pod \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\" (UID: \"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f\") " Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.741989 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-bundle" (OuterVolumeSpecName: "bundle") pod "f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" (UID: "f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.747232 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-kube-api-access-28z9v" (OuterVolumeSpecName: "kube-api-access-28z9v") pod "f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" (UID: "f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f"). InnerVolumeSpecName "kube-api-access-28z9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.756128 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-util" (OuterVolumeSpecName: "util") pod "f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" (UID: "f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.842698 4656 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.842757 4656 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:26 crc kubenswrapper[4656]: I0128 15:32:26.842770 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28z9v\" (UniqueName: \"kubernetes.io/projected/f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f-kube-api-access-28z9v\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:27 crc kubenswrapper[4656]: I0128 15:32:27.393638 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" event={"ID":"f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f","Type":"ContainerDied","Data":"04a156875da8dc6d4d7efec6722544b9e8f5bd11a721782c1b47bdebab3a0813"} Jan 28 15:32:27 crc kubenswrapper[4656]: I0128 15:32:27.393690 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d" Jan 28 15:32:27 crc kubenswrapper[4656]: I0128 15:32:27.393695 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04a156875da8dc6d4d7efec6722544b9e8f5bd11a721782c1b47bdebab3a0813" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.871036 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk"] Jan 28 15:32:35 crc kubenswrapper[4656]: E0128 15:32:35.871976 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" containerName="console" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.871998 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" containerName="console" Jan 28 15:32:35 crc kubenswrapper[4656]: E0128 15:32:35.872028 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerName="pull" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.872036 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerName="pull" Jan 28 15:32:35 crc kubenswrapper[4656]: E0128 15:32:35.872054 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerName="extract" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.872063 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerName="extract" Jan 28 15:32:35 crc kubenswrapper[4656]: E0128 15:32:35.872074 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerName="util" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.872088 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerName="util" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.872261 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f" containerName="extract" Jan 28 
15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.872295 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd5c0d8-8e06-4dfe-9e89-fd89b194ec1f" containerName="console" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.872864 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.877561 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93bc850d-d691-43b6-8668-79f21bd350a7-apiservice-cert\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.877632 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9pmr\" (UniqueName: \"kubernetes.io/projected/93bc850d-d691-43b6-8668-79f21bd350a7-kube-api-access-n9pmr\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.877614 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-sjnks" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.877734 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93bc850d-d691-43b6-8668-79f21bd350a7-webhook-cert\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.877955 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.878333 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.878600 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.883021 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.906387 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk"] Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.978834 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9pmr\" (UniqueName: \"kubernetes.io/projected/93bc850d-d691-43b6-8668-79f21bd350a7-kube-api-access-n9pmr\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.979145 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/93bc850d-d691-43b6-8668-79f21bd350a7-apiservice-cert\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.979196 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93bc850d-d691-43b6-8668-79f21bd350a7-webhook-cert\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.991148 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93bc850d-d691-43b6-8668-79f21bd350a7-webhook-cert\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:35 crc kubenswrapper[4656]: I0128 15:32:35.992858 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93bc850d-d691-43b6-8668-79f21bd350a7-apiservice-cert\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.023689 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9pmr\" (UniqueName: \"kubernetes.io/projected/93bc850d-d691-43b6-8668-79f21bd350a7-kube-api-access-n9pmr\") pod \"metallb-operator-controller-manager-dfcddcb8c-gjtgk\" (UID: \"93bc850d-d691-43b6-8668-79f21bd350a7\") " pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.219502 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf"] Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.220303 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.222497 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.222694 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.222929 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-wjfks" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.238456 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf"] Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.264714 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.289188 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-webhook-cert\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.289709 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-apiservice-cert\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.289735 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66tz5\" (UniqueName: \"kubernetes.io/projected/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-kube-api-access-66tz5\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.390537 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-webhook-cert\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.390634 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66tz5\" (UniqueName: \"kubernetes.io/projected/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-kube-api-access-66tz5\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.390659 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-apiservice-cert\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.396327 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-webhook-cert\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.396557 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-apiservice-cert\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " 
pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.417099 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66tz5\" (UniqueName: \"kubernetes.io/projected/585f1e9a-4070-4b23-bbab-f29ae7e95cf0-kube-api-access-66tz5\") pod \"metallb-operator-webhook-server-7bdb79d58b-gsggf\" (UID: \"585f1e9a-4070-4b23-bbab-f29ae7e95cf0\") " pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.534872 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:36 crc kubenswrapper[4656]: I0128 15:32:36.959582 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk"] Jan 28 15:32:37 crc kubenswrapper[4656]: I0128 15:32:37.022965 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf"] Jan 28 15:32:37 crc kubenswrapper[4656]: I0128 15:32:37.464408 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" event={"ID":"93bc850d-d691-43b6-8668-79f21bd350a7","Type":"ContainerStarted","Data":"f2cb169b066cbe2b299e58230534327fe8069c2932807d646939a80cdbd29711"} Jan 28 15:32:37 crc kubenswrapper[4656]: I0128 15:32:37.465599 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" event={"ID":"585f1e9a-4070-4b23-bbab-f29ae7e95cf0","Type":"ContainerStarted","Data":"cbdc95b54efd7551c4b7ae1958c6954302acf696c7334e5d96b0d2dd380bce4b"} Jan 28 15:32:41 crc kubenswrapper[4656]: I0128 15:32:41.266421 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:32:41 crc kubenswrapper[4656]: I0128 15:32:41.266897 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:32:43 crc kubenswrapper[4656]: I0128 15:32:43.512093 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" event={"ID":"585f1e9a-4070-4b23-bbab-f29ae7e95cf0","Type":"ContainerStarted","Data":"4e788ec25cbde59f95b43247b7e7916e313a289e710f56b496571ba1e9e211a3"} Jan 28 15:32:43 crc kubenswrapper[4656]: I0128 15:32:43.512487 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:32:43 crc kubenswrapper[4656]: I0128 15:32:43.514394 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" event={"ID":"93bc850d-d691-43b6-8668-79f21bd350a7","Type":"ContainerStarted","Data":"af4b6cb74f09e3294b44bc71e97e4cc4b094427f814f54529716e6a5730df9c7"} Jan 28 15:32:43 crc kubenswrapper[4656]: I0128 15:32:43.514696 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:32:43 crc kubenswrapper[4656]: I0128 15:32:43.538137 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" podStartSLOduration=1.35362147 podStartE2EDuration="7.538115067s" podCreationTimestamp="2026-01-28 15:32:36 +0000 UTC" firstStartedPulling="2026-01-28 15:32:37.035463274 +0000 UTC m=+847.543634088" lastFinishedPulling="2026-01-28 15:32:43.219956871 +0000 UTC m=+853.728127685" observedRunningTime="2026-01-28 15:32:43.532911468 +0000 UTC m=+854.041082292" watchObservedRunningTime="2026-01-28 15:32:43.538115067 +0000 UTC m=+854.046285891" Jan 28 15:32:43 crc kubenswrapper[4656]: I0128 15:32:43.554139 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" podStartSLOduration=2.350229032 podStartE2EDuration="8.554115737s" podCreationTimestamp="2026-01-28 15:32:35 +0000 UTC" firstStartedPulling="2026-01-28 15:32:36.999095668 +0000 UTC m=+847.507266472" lastFinishedPulling="2026-01-28 15:32:43.202982373 +0000 UTC m=+853.711153177" observedRunningTime="2026-01-28 15:32:43.551864263 +0000 UTC m=+854.060035087" watchObservedRunningTime="2026-01-28 15:32:43.554115737 +0000 UTC m=+854.062286541" Jan 28 15:32:56 crc kubenswrapper[4656]: I0128 15:32:56.539972 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7bdb79d58b-gsggf" Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.265599 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.266323 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.266416 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.267145 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87c17d0db94ead712d442056e9a18e38055b40f27c59008c11f1ea77ac6037d0"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.267243 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://87c17d0db94ead712d442056e9a18e38055b40f27c59008c11f1ea77ac6037d0" gracePeriod=600 Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.683029 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" 
containerID="87c17d0db94ead712d442056e9a18e38055b40f27c59008c11f1ea77ac6037d0" exitCode=0 Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.683088 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"87c17d0db94ead712d442056e9a18e38055b40f27c59008c11f1ea77ac6037d0"} Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.683431 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"45af716abfac826ba3a4dfbcd1d22436c5270721d55f11ffa5d85cae3cd0840f"} Jan 28 15:33:11 crc kubenswrapper[4656]: I0128 15:33:11.683494 4656 scope.go:117] "RemoveContainer" containerID="c69beb7ab8edbd918c480179277219ae11258f52e8862dd697c2421ee64e9af1" Jan 28 15:33:16 crc kubenswrapper[4656]: I0128 15:33:16.267566 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-dfcddcb8c-gjtgk" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.004563 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-z94g8"] Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.007525 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.016193 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-t7bxm" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.017010 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.020910 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.024255 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq"] Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.025004 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.026918 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.058906 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq"] Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.113078 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzkdx\" (UniqueName: \"kubernetes.io/projected/e31100cc-2c8a-4682-b6fb-acc4157f7d43-kube-api-access-mzkdx\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.113136 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-metrics\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.113686 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-conf\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.113748 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-reloader\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.113786 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e31100cc-2c8a-4682-b6fb-acc4157f7d43-metrics-certs\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.114024 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-startup\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.114228 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-sockets\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.142528 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-k4qr2"] Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.143479 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.145182 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.146484 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.151248 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.153059 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-dm6w7" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.164958 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-xgjbl"] Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.165908 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.171133 4656 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.190029 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-xgjbl"] Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215537 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzkdx\" (UniqueName: \"kubernetes.io/projected/e31100cc-2c8a-4682-b6fb-acc4157f7d43-kube-api-access-mzkdx\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215583 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-metrics\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215612 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-cert\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215639 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/555938b5-7504-41e4-9331-7be899491299-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-nzvcq\" (UID: \"555938b5-7504-41e4-9331-7be899491299\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215663 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metallb-excludel2\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215690 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metrics-certs\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215709 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-conf\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215729 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-reloader\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215752 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215788 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e31100cc-2c8a-4682-b6fb-acc4157f7d43-metrics-certs\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215819 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw2c7\" (UniqueName: \"kubernetes.io/projected/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-kube-api-access-mw2c7\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215854 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jpd2\" (UniqueName: \"kubernetes.io/projected/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-kube-api-access-8jpd2\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215876 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-startup\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215906 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q72k\" (UniqueName: \"kubernetes.io/projected/555938b5-7504-41e4-9331-7be899491299-kube-api-access-9q72k\") pod \"frr-k8s-webhook-server-7df86c4f6c-nzvcq\" (UID: \"555938b5-7504-41e4-9331-7be899491299\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215964 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-sockets\") pod \"frr-k8s-z94g8\" (UID: 
\"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.215987 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-metrics-certs\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.216776 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-metrics\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.217040 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-conf\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.221468 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-reloader\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.222863 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-sockets\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.223061 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e31100cc-2c8a-4682-b6fb-acc4157f7d43-frr-startup\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.240856 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e31100cc-2c8a-4682-b6fb-acc4157f7d43-metrics-certs\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.248810 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzkdx\" (UniqueName: \"kubernetes.io/projected/e31100cc-2c8a-4682-b6fb-acc4157f7d43-kube-api-access-mzkdx\") pod \"frr-k8s-z94g8\" (UID: \"e31100cc-2c8a-4682-b6fb-acc4157f7d43\") " pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316316 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q72k\" (UniqueName: \"kubernetes.io/projected/555938b5-7504-41e4-9331-7be899491299-kube-api-access-9q72k\") pod \"frr-k8s-webhook-server-7df86c4f6c-nzvcq\" (UID: \"555938b5-7504-41e4-9331-7be899491299\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316370 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-metrics-certs\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316411 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-cert\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316428 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/555938b5-7504-41e4-9331-7be899491299-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-nzvcq\" (UID: \"555938b5-7504-41e4-9331-7be899491299\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316454 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metallb-excludel2\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316474 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metrics-certs\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316503 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316526 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw2c7\" (UniqueName: \"kubernetes.io/projected/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-kube-api-access-mw2c7\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.316549 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jpd2\" (UniqueName: \"kubernetes.io/projected/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-kube-api-access-8jpd2\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: E0128 15:33:17.316663 4656 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 15:33:17 crc kubenswrapper[4656]: E0128 15:33:17.316777 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist podName:8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3 nodeName:}" failed. No retries permitted until 2026-01-28 15:33:17.816734851 +0000 UTC m=+888.324905655 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist") pod "speaker-k4qr2" (UID: "8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3") : secret "metallb-memberlist" not found Jan 28 15:33:17 crc kubenswrapper[4656]: E0128 15:33:17.316949 4656 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 28 15:33:17 crc kubenswrapper[4656]: E0128 15:33:17.317008 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metrics-certs podName:8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3 nodeName:}" failed. No retries permitted until 2026-01-28 15:33:17.816985208 +0000 UTC m=+888.325156022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metrics-certs") pod "speaker-k4qr2" (UID: "8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3") : secret "speaker-certs-secret" not found Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.317254 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metallb-excludel2\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: E0128 15:33:17.317449 4656 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 28 15:33:17 crc kubenswrapper[4656]: E0128 15:33:17.317510 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-metrics-certs podName:a0c56151-b07f-4c02-9b4c-0b48c4dd8a03 nodeName:}" failed. No retries permitted until 2026-01-28 15:33:17.817489673 +0000 UTC m=+888.325660567 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-metrics-certs") pod "controller-6968d8fdc4-xgjbl" (UID: "a0c56151-b07f-4c02-9b4c-0b48c4dd8a03") : secret "controller-certs-secret" not found Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.319824 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-cert\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.324189 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.329722 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/555938b5-7504-41e4-9331-7be899491299-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-nzvcq\" (UID: \"555938b5-7504-41e4-9331-7be899491299\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.337647 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw2c7\" (UniqueName: \"kubernetes.io/projected/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-kube-api-access-mw2c7\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.338137 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q72k\" (UniqueName: \"kubernetes.io/projected/555938b5-7504-41e4-9331-7be899491299-kube-api-access-9q72k\") pod \"frr-k8s-webhook-server-7df86c4f6c-nzvcq\" (UID: \"555938b5-7504-41e4-9331-7be899491299\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.338803 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.349793 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jpd2\" (UniqueName: \"kubernetes.io/projected/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-kube-api-access-8jpd2\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.721851 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerStarted","Data":"6f68746b3d9f4d682982276809a2c870c5d38fbb0d71ec0397a4adbbc8a1f3a9"} Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.821966 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metrics-certs\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.822016 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.822062 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-metrics-certs\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: E0128 15:33:17.822335 4656 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 15:33:17 crc kubenswrapper[4656]: E0128 15:33:17.822435 4656 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist podName:8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3 nodeName:}" failed. No retries permitted until 2026-01-28 15:33:18.822411067 +0000 UTC m=+889.330581871 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist") pod "speaker-k4qr2" (UID: "8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3") : secret "metallb-memberlist" not found Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.823454 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq"] Jan 28 15:33:17 crc kubenswrapper[4656]: W0128 15:33:17.826538 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod555938b5_7504_41e4_9331_7be899491299.slice/crio-4a68f55507fac9296298d244fe285dcf21f01581d58854197c428e602c4159a4 WatchSource:0}: Error finding container 4a68f55507fac9296298d244fe285dcf21f01581d58854197c428e602c4159a4: Status 404 returned error can't find the container with id 4a68f55507fac9296298d244fe285dcf21f01581d58854197c428e602c4159a4 Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.827496 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0c56151-b07f-4c02-9b4c-0b48c4dd8a03-metrics-certs\") pod \"controller-6968d8fdc4-xgjbl\" (UID: \"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03\") " pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:17 crc kubenswrapper[4656]: I0128 15:33:17.829622 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-metrics-certs\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:18 crc kubenswrapper[4656]: I0128 15:33:18.083225 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:18 crc kubenswrapper[4656]: I0128 15:33:18.399667 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-xgjbl"] Jan 28 15:33:18 crc kubenswrapper[4656]: W0128 15:33:18.421565 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0c56151_b07f_4c02_9b4c_0b48c4dd8a03.slice/crio-80946ce0751326de500df6bead81877dda7103177517d0515992ccc75d986a34 WatchSource:0}: Error finding container 80946ce0751326de500df6bead81877dda7103177517d0515992ccc75d986a34: Status 404 returned error can't find the container with id 80946ce0751326de500df6bead81877dda7103177517d0515992ccc75d986a34 Jan 28 15:33:18 crc kubenswrapper[4656]: I0128 15:33:18.730025 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-xgjbl" event={"ID":"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03","Type":"ContainerStarted","Data":"340f8d38554e114e7a7a00e32c5fe17afb2c5c48a7b9d6800224e0c2c45659b2"} Jan 28 15:33:18 crc kubenswrapper[4656]: I0128 15:33:18.730082 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-xgjbl" event={"ID":"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03","Type":"ContainerStarted","Data":"80946ce0751326de500df6bead81877dda7103177517d0515992ccc75d986a34"} Jan 28 15:33:18 crc kubenswrapper[4656]: I0128 15:33:18.732014 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" event={"ID":"555938b5-7504-41e4-9331-7be899491299","Type":"ContainerStarted","Data":"4a68f55507fac9296298d244fe285dcf21f01581d58854197c428e602c4159a4"} Jan 28 15:33:18 crc kubenswrapper[4656]: I0128 15:33:18.833121 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:18 crc kubenswrapper[4656]: I0128 15:33:18.840878 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3-memberlist\") pod \"speaker-k4qr2\" (UID: \"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3\") " pod="metallb-system/speaker-k4qr2" Jan 28 15:33:18 crc kubenswrapper[4656]: I0128 15:33:18.957644 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-k4qr2" Jan 28 15:33:18 crc kubenswrapper[4656]: W0128 15:33:18.998443 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ffb0cda_bbe7_41e1_acdb_fd11fd2e33a3.slice/crio-11ec395aaa4435a9365d4a82acf6e69d481f9f73f7abc6b8cb98a17d61b467e8 WatchSource:0}: Error finding container 11ec395aaa4435a9365d4a82acf6e69d481f9f73f7abc6b8cb98a17d61b467e8: Status 404 returned error can't find the container with id 11ec395aaa4435a9365d4a82acf6e69d481f9f73f7abc6b8cb98a17d61b467e8 Jan 28 15:33:19 crc kubenswrapper[4656]: I0128 15:33:19.746853 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-xgjbl" event={"ID":"a0c56151-b07f-4c02-9b4c-0b48c4dd8a03","Type":"ContainerStarted","Data":"39153442b29f1725a200a9d891027834262f56487c54efcb36c224a6f09c2d89"} Jan 28 15:33:19 crc kubenswrapper[4656]: I0128 15:33:19.747422 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:19 crc kubenswrapper[4656]: I0128 15:33:19.755433 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k4qr2" event={"ID":"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3","Type":"ContainerStarted","Data":"4b62abfac82e167105d32e08a464d055b30fd488f25c8892e522441ace3728f0"} Jan 28 15:33:19 crc kubenswrapper[4656]: I0128 15:33:19.755473 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k4qr2" event={"ID":"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3","Type":"ContainerStarted","Data":"2f3b90df056f8691c3a39b27f294c2fab0f582186cc819bda7de16d24bc0d72f"} Jan 28 15:33:19 crc kubenswrapper[4656]: I0128 15:33:19.755483 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k4qr2" event={"ID":"8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3","Type":"ContainerStarted","Data":"11ec395aaa4435a9365d4a82acf6e69d481f9f73f7abc6b8cb98a17d61b467e8"} Jan 28 15:33:19 crc kubenswrapper[4656]: I0128 15:33:19.755912 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-k4qr2" Jan 28 15:33:19 crc kubenswrapper[4656]: I0128 15:33:19.826639 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-xgjbl" podStartSLOduration=2.826616583 podStartE2EDuration="2.826616583s" podCreationTimestamp="2026-01-28 15:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:33:19.793647316 +0000 UTC m=+890.301818120" watchObservedRunningTime="2026-01-28 15:33:19.826616583 +0000 UTC m=+890.334787387" Jan 28 15:33:21 crc kubenswrapper[4656]: I0128 15:33:21.195325 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-k4qr2" podStartSLOduration=4.195303349 podStartE2EDuration="4.195303349s" podCreationTimestamp="2026-01-28 15:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:33:19.825004747 +0000 UTC m=+890.333175571" watchObservedRunningTime="2026-01-28 15:33:21.195303349 +0000 UTC m=+891.703474153" Jan 28 15:33:28 crc kubenswrapper[4656]: I0128 15:33:28.088908 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-xgjbl" Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 
Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.766964 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d6s8p"]
Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.769536 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d6s8p"
Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.792243 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d6s8p"]
Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.840888 4656 generic.go:334] "Generic (PLEG): container finished" podID="e31100cc-2c8a-4682-b6fb-acc4157f7d43" containerID="8d01c29fd96b6a245f92a36e09da008a30e90cba3803ead32b6a1c660cf0ea11" exitCode=0
Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.840991 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerDied","Data":"8d01c29fd96b6a245f92a36e09da008a30e90cba3803ead32b6a1c660cf0ea11"}
Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.843208 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" event={"ID":"555938b5-7504-41e4-9331-7be899491299","Type":"ContainerStarted","Data":"488c5139fcb911039a92150deb064cf9e317ad744e75620626c206f313505bfc"}
Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.843355 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq"
Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.896393 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-utilities\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p"
Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.896503 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-catalog-content\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p"
Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.896584 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6tz7\" (UniqueName: \"kubernetes.io/projected/aea8db27-38e0-4ac8-8835-b9d701c8f230-kube-api-access-b6tz7\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p"
Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.946655 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq" podStartSLOduration=2.9299979479999997 podStartE2EDuration="14.946623914s" podCreationTimestamp="2026-01-28 15:33:16 +0000 UTC" firstStartedPulling="2026-01-28 15:33:17.828963286 +0000 UTC m=+888.337134090" lastFinishedPulling="2026-01-28 15:33:29.845607252 +0000 UTC m=+900.353778056" observedRunningTime="2026-01-28 15:33:30.943965887 +0000 UTC m=+901.452136691" watchObservedRunningTime="2026-01-28 15:33:30.946623914 +0000 UTC m=+901.454794718"
Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.998138 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-catalog-content\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p"
Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.998269 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6tz7\" (UniqueName: \"kubernetes.io/projected/aea8db27-38e0-4ac8-8835-b9d701c8f230-kube-api-access-b6tz7\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p"
Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.998327 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-utilities\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p"
Jan 28 15:33:30 crc kubenswrapper[4656]: I0128 15:33:30.999381 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-catalog-content\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p"
Jan 28 15:33:31 crc kubenswrapper[4656]: I0128 15:33:31.000363 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-utilities\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p"
Jan 28 15:33:31 crc kubenswrapper[4656]: I0128 15:33:31.022311 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6tz7\" (UniqueName: \"kubernetes.io/projected/aea8db27-38e0-4ac8-8835-b9d701c8f230-kube-api-access-b6tz7\") pod \"community-operators-d6s8p\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " pod="openshift-marketplace/community-operators-d6s8p"
Jan 28 15:33:31 crc kubenswrapper[4656]: I0128 15:33:31.083965 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d6s8p"
Jan 28 15:33:31 crc kubenswrapper[4656]: I0128 15:33:31.666704 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d6s8p"]
Jan 28 15:33:31 crc kubenswrapper[4656]: W0128 15:33:31.671859 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaea8db27_38e0_4ac8_8835_b9d701c8f230.slice/crio-c85a20e117717387a489dc5dd61783b4e5fcd7cda3111851f391796dcda146f0 WatchSource:0}: Error finding container c85a20e117717387a489dc5dd61783b4e5fcd7cda3111851f391796dcda146f0: Status 404 returned error can't find the container with id c85a20e117717387a489dc5dd61783b4e5fcd7cda3111851f391796dcda146f0
Jan 28 15:33:31 crc kubenswrapper[4656]: I0128 15:33:31.852466 4656 generic.go:334] "Generic (PLEG): container finished" podID="e31100cc-2c8a-4682-b6fb-acc4157f7d43" containerID="bec3cc61215aa6c2b131a7f50243ebbd5b165c44cf464a132df799820604071a" exitCode=0
Jan 28 15:33:31 crc kubenswrapper[4656]: I0128 15:33:31.854110 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerDied","Data":"bec3cc61215aa6c2b131a7f50243ebbd5b165c44cf464a132df799820604071a"}
Jan 28 15:33:31 crc kubenswrapper[4656]: I0128 15:33:31.856055 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6s8p" event={"ID":"aea8db27-38e0-4ac8-8835-b9d701c8f230","Type":"ContainerStarted","Data":"c85a20e117717387a489dc5dd61783b4e5fcd7cda3111851f391796dcda146f0"}
Jan 28 15:33:32 crc kubenswrapper[4656]: I0128 15:33:32.862683 4656 generic.go:334] "Generic (PLEG): container finished" podID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerID="fe8b322c739039fd04ab424c28e05ffbfdd4af0e21a26d5ce1fb07d9670baa75" exitCode=0
Jan 28 15:33:32 crc kubenswrapper[4656]: I0128 15:33:32.862772 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6s8p" event={"ID":"aea8db27-38e0-4ac8-8835-b9d701c8f230","Type":"ContainerDied","Data":"fe8b322c739039fd04ab424c28e05ffbfdd4af0e21a26d5ce1fb07d9670baa75"}
Jan 28 15:33:32 crc kubenswrapper[4656]: I0128 15:33:32.866562 4656 generic.go:334] "Generic (PLEG): container finished" podID="e31100cc-2c8a-4682-b6fb-acc4157f7d43" containerID="705580916d7d736910f35c909b64a4ea500c151dcc81e7af94f7dc6ebb96efa7" exitCode=0
Jan 28 15:33:32 crc kubenswrapper[4656]: I0128 15:33:32.866598 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerDied","Data":"705580916d7d736910f35c909b64a4ea500c151dcc81e7af94f7dc6ebb96efa7"}
Jan 28 15:33:36 crc kubenswrapper[4656]: I0128 15:33:36.894914 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerStarted","Data":"988a794031b8f6e722d0f71bc04ac75b3bb4a225cca7ba1c33dd15fb9f327082"}
Jan 28 15:33:36 crc kubenswrapper[4656]: I0128 15:33:36.895570 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerStarted","Data":"2f63e67d64f51292396382dbd358736caf1f8d965cb05da30c4b6427a143c269"}
Jan 28 15:33:37 crc kubenswrapper[4656]: I0128 15:33:37.917960 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerStarted","Data":"2301a34435ae57f2c956f3a22e00ab2f916f51af1fae0ad0025e198d9bab9ce1"}
Jan 28 15:33:37 crc kubenswrapper[4656]: I0128 15:33:37.919256 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerStarted","Data":"c91c43961ecf120b0ae37bdf11bf68ae5d224cdda8ace05a93bf5b1048100296"}
Jan 28 15:33:37 crc kubenswrapper[4656]: I0128 15:33:37.921282 4656 generic.go:334] "Generic (PLEG): container finished" podID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerID="9c21dc5b63458bfdf67af2165e2644c35c783d3c4eb17c296ab48b7688983ee9" exitCode=0
Jan 28 15:33:37 crc kubenswrapper[4656]: I0128 15:33:37.921309 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6s8p" event={"ID":"aea8db27-38e0-4ac8-8835-b9d701c8f230","Type":"ContainerDied","Data":"9c21dc5b63458bfdf67af2165e2644c35c783d3c4eb17c296ab48b7688983ee9"}
Jan 28 15:33:38 crc kubenswrapper[4656]: I0128 15:33:38.930844 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerStarted","Data":"672785aaeb3dcfc1a7044d7ae574ee59501905f8f0ccf92b43011e15c73fb6f5"}
Jan 28 15:33:38 crc kubenswrapper[4656]: I0128 15:33:38.931101 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z94g8" event={"ID":"e31100cc-2c8a-4682-b6fb-acc4157f7d43","Type":"ContainerStarted","Data":"e82ca68b3f1f13b5f5d16e11b29f1cd3f813aa8af6403ffcc52565d15adbcbdf"}
Jan 28 15:33:38 crc kubenswrapper[4656]: I0128 15:33:38.931705 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-z94g8"
Jan 28 15:33:38 crc kubenswrapper[4656]: I0128 15:33:38.961549 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-k4qr2"
Jan 28 15:33:38 crc kubenswrapper[4656]: I0128 15:33:38.969989 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-z94g8" podStartSLOduration=10.586425115 podStartE2EDuration="22.969964108s" podCreationTimestamp="2026-01-28 15:33:16 +0000 UTC" firstStartedPulling="2026-01-28 15:33:17.496490909 +0000 UTC m=+888.004661723" lastFinishedPulling="2026-01-28 15:33:29.880029912 +0000 UTC m=+900.388200716" observedRunningTime="2026-01-28 15:33:38.964801619 +0000 UTC m=+909.472972433" watchObservedRunningTime="2026-01-28 15:33:38.969964108 +0000 UTC m=+909.478134912"
Jan 28 15:33:39 crc kubenswrapper[4656]: I0128 15:33:39.941700 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6s8p" event={"ID":"aea8db27-38e0-4ac8-8835-b9d701c8f230","Type":"ContainerStarted","Data":"7dbc24f34e40013f613a8a2c137595d28f480ccce0e5b63798e70e4ef84cc0e5"}
15:33:39.970015561 +0000 UTC m=+910.478186365" Jan 28 15:33:41 crc kubenswrapper[4656]: I0128 15:33:41.084844 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:41 crc kubenswrapper[4656]: I0128 15:33:41.085376 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:41 crc kubenswrapper[4656]: I0128 15:33:41.879724 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-hc9z5"] Jan 28 15:33:41 crc kubenswrapper[4656]: I0128 15:33:41.881011 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hc9z5" Jan 28 15:33:41 crc kubenswrapper[4656]: I0128 15:33:41.884371 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 28 15:33:41 crc kubenswrapper[4656]: I0128 15:33:41.884546 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-9nnjn" Jan 28 15:33:41 crc kubenswrapper[4656]: I0128 15:33:41.897644 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 28 15:33:41 crc kubenswrapper[4656]: I0128 15:33:41.914336 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hc9z5"] Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.079980 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg7wc\" (UniqueName: \"kubernetes.io/projected/586a9839-5486-44ed-bccd-7219927f1582-kube-api-access-mg7wc\") pod \"openstack-operator-index-hc9z5\" (UID: \"586a9839-5486-44ed-bccd-7219927f1582\") " pod="openstack-operators/openstack-operator-index-hc9z5" Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.181414 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg7wc\" (UniqueName: \"kubernetes.io/projected/586a9839-5486-44ed-bccd-7219927f1582-kube-api-access-mg7wc\") pod \"openstack-operator-index-hc9z5\" (UID: \"586a9839-5486-44ed-bccd-7219927f1582\") " pod="openstack-operators/openstack-operator-index-hc9z5" Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.196800 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-d6s8p" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="registry-server" probeResult="failure" output=< Jan 28 15:33:42 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 15:33:42 crc kubenswrapper[4656]: > Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.212930 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg7wc\" (UniqueName: \"kubernetes.io/projected/586a9839-5486-44ed-bccd-7219927f1582-kube-api-access-mg7wc\") pod \"openstack-operator-index-hc9z5\" (UID: \"586a9839-5486-44ed-bccd-7219927f1582\") " pod="openstack-operators/openstack-operator-index-hc9z5" Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.324971 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-z94g8" Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.378929 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-z94g8" Jan 
Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.501006 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hc9z5"
Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.802209 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hc9z5"]
Jan 28 15:33:42 crc kubenswrapper[4656]: I0128 15:33:42.979896 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hc9z5" event={"ID":"586a9839-5486-44ed-bccd-7219927f1582","Type":"ContainerStarted","Data":"1d55a2c91351df6542144ca7b0d478b0ef9b7d981b9e83ff9b7d814420f07605"}
Jan 28 15:33:45 crc kubenswrapper[4656]: I0128 15:33:45.243340 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-hc9z5"]
Jan 28 15:33:45 crc kubenswrapper[4656]: I0128 15:33:45.859344 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tknhq"]
Jan 28 15:33:45 crc kubenswrapper[4656]: I0128 15:33:45.863046 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tknhq"
Jan 28 15:33:45 crc kubenswrapper[4656]: I0128 15:33:45.871851 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tknhq"]
Jan 28 15:33:45 crc kubenswrapper[4656]: I0128 15:33:45.994027 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfs75\" (UniqueName: \"kubernetes.io/projected/c65965be-4267-4c92-a9b1-046d85299b2c-kube-api-access-dfs75\") pod \"openstack-operator-index-tknhq\" (UID: \"c65965be-4267-4c92-a9b1-046d85299b2c\") " pod="openstack-operators/openstack-operator-index-tknhq"
Jan 28 15:33:46 crc kubenswrapper[4656]: I0128 15:33:46.095272 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfs75\" (UniqueName: \"kubernetes.io/projected/c65965be-4267-4c92-a9b1-046d85299b2c-kube-api-access-dfs75\") pod \"openstack-operator-index-tknhq\" (UID: \"c65965be-4267-4c92-a9b1-046d85299b2c\") " pod="openstack-operators/openstack-operator-index-tknhq"
Jan 28 15:33:46 crc kubenswrapper[4656]: I0128 15:33:46.120758 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfs75\" (UniqueName: \"kubernetes.io/projected/c65965be-4267-4c92-a9b1-046d85299b2c-kube-api-access-dfs75\") pod \"openstack-operator-index-tknhq\" (UID: \"c65965be-4267-4c92-a9b1-046d85299b2c\") " pod="openstack-operators/openstack-operator-index-tknhq"
Jan 28 15:33:46 crc kubenswrapper[4656]: I0128 15:33:46.202654 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tknhq"
Jan 28 15:33:47 crc kubenswrapper[4656]: I0128 15:33:47.234746 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tknhq"]
Jan 28 15:33:47 crc kubenswrapper[4656]: I0128 15:33:47.330384 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-z94g8"
Jan 28 15:33:47 crc kubenswrapper[4656]: I0128 15:33:47.369580 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nzvcq"
Jan 28 15:33:48 crc kubenswrapper[4656]: W0128 15:33:48.107419 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc65965be_4267_4c92_a9b1_046d85299b2c.slice/crio-9257c587405d96ebfdd173d424e3f50644adf64d1899d83653b4f2bc04bd0c30 WatchSource:0}: Error finding container 9257c587405d96ebfdd173d424e3f50644adf64d1899d83653b4f2bc04bd0c30: Status 404 returned error can't find the container with id 9257c587405d96ebfdd173d424e3f50644adf64d1899d83653b4f2bc04bd0c30
Jan 28 15:33:49 crc kubenswrapper[4656]: I0128 15:33:49.022146 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tknhq" event={"ID":"c65965be-4267-4c92-a9b1-046d85299b2c","Type":"ContainerStarted","Data":"9257c587405d96ebfdd173d424e3f50644adf64d1899d83653b4f2bc04bd0c30"}
Jan 28 15:33:50 crc kubenswrapper[4656]: I0128 15:33:50.030078 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tknhq" event={"ID":"c65965be-4267-4c92-a9b1-046d85299b2c","Type":"ContainerStarted","Data":"c4f5bb1cd01c83029fc1f496c11478e78ba6b1ba94a3b5cb9790a1faee4d6ddc"}
Jan 28 15:33:50 crc kubenswrapper[4656]: I0128 15:33:50.031533 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hc9z5" event={"ID":"586a9839-5486-44ed-bccd-7219927f1582","Type":"ContainerStarted","Data":"32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09"}
Jan 28 15:33:50 crc kubenswrapper[4656]: I0128 15:33:50.031679 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-hc9z5" podUID="586a9839-5486-44ed-bccd-7219927f1582" containerName="registry-server" containerID="cri-o://32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09" gracePeriod=2
Jan 28 15:33:50 crc kubenswrapper[4656]: I0128 15:33:50.074359 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tknhq" podStartSLOduration=4.27603481 podStartE2EDuration="5.074332009s" podCreationTimestamp="2026-01-28 15:33:45 +0000 UTC" firstStartedPulling="2026-01-28 15:33:48.109317893 +0000 UTC m=+918.617488707" lastFinishedPulling="2026-01-28 15:33:48.907615112 +0000 UTC m=+919.415785906" observedRunningTime="2026-01-28 15:33:50.04766199 +0000 UTC m=+920.555832794" watchObservedRunningTime="2026-01-28 15:33:50.074332009 +0000 UTC m=+920.582502813"
Need to start a new one" pod="openstack-operators/openstack-operator-index-hc9z5" Jan 28 15:33:50 crc kubenswrapper[4656]: I0128 15:33:50.557821 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg7wc\" (UniqueName: \"kubernetes.io/projected/586a9839-5486-44ed-bccd-7219927f1582-kube-api-access-mg7wc\") pod \"586a9839-5486-44ed-bccd-7219927f1582\" (UID: \"586a9839-5486-44ed-bccd-7219927f1582\") " Jan 28 15:33:50 crc kubenswrapper[4656]: I0128 15:33:50.563930 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586a9839-5486-44ed-bccd-7219927f1582-kube-api-access-mg7wc" (OuterVolumeSpecName: "kube-api-access-mg7wc") pod "586a9839-5486-44ed-bccd-7219927f1582" (UID: "586a9839-5486-44ed-bccd-7219927f1582"). InnerVolumeSpecName "kube-api-access-mg7wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:50 crc kubenswrapper[4656]: I0128 15:33:50.658929 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg7wc\" (UniqueName: \"kubernetes.io/projected/586a9839-5486-44ed-bccd-7219927f1582-kube-api-access-mg7wc\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.039839 4656 generic.go:334] "Generic (PLEG): container finished" podID="586a9839-5486-44ed-bccd-7219927f1582" containerID="32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09" exitCode=0 Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.039921 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hc9z5" Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.039942 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hc9z5" event={"ID":"586a9839-5486-44ed-bccd-7219927f1582","Type":"ContainerDied","Data":"32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09"} Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.040011 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hc9z5" event={"ID":"586a9839-5486-44ed-bccd-7219927f1582","Type":"ContainerDied","Data":"1d55a2c91351df6542144ca7b0d478b0ef9b7d981b9e83ff9b7d814420f07605"} Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.040058 4656 scope.go:117] "RemoveContainer" containerID="32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09" Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.063567 4656 scope.go:117] "RemoveContainer" containerID="32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09" Jan 28 15:33:51 crc kubenswrapper[4656]: E0128 15:33:51.065620 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09\": container with ID starting with 32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09 not found: ID does not exist" containerID="32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09" Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.065665 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09"} err="failed to get container status \"32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09\": rpc error: code = NotFound desc = could not find container 
\"32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09\": container with ID starting with 32541dbdfb41540d20dd33d0f51ce8353c8fe9d33d83a33a7b33f22b811fdf09 not found: ID does not exist" Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.072058 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-hc9z5"] Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.076899 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-hc9z5"] Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.127432 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.177815 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="586a9839-5486-44ed-bccd-7219927f1582" path="/var/lib/kubelet/pods/586a9839-5486-44ed-bccd-7219927f1582/volumes" Jan 28 15:33:51 crc kubenswrapper[4656]: I0128 15:33:51.178372 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:54 crc kubenswrapper[4656]: I0128 15:33:54.241900 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d6s8p"] Jan 28 15:33:54 crc kubenswrapper[4656]: I0128 15:33:54.242596 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d6s8p" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="registry-server" containerID="cri-o://7dbc24f34e40013f613a8a2c137595d28f480ccce0e5b63798e70e4ef84cc0e5" gracePeriod=2 Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.097093 4656 generic.go:334] "Generic (PLEG): container finished" podID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerID="7dbc24f34e40013f613a8a2c137595d28f480ccce0e5b63798e70e4ef84cc0e5" exitCode=0 Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.097411 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6s8p" event={"ID":"aea8db27-38e0-4ac8-8835-b9d701c8f230","Type":"ContainerDied","Data":"7dbc24f34e40013f613a8a2c137595d28f480ccce0e5b63798e70e4ef84cc0e5"} Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.401411 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.496727 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-catalog-content\") pod \"aea8db27-38e0-4ac8-8835-b9d701c8f230\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.496865 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-utilities\") pod \"aea8db27-38e0-4ac8-8835-b9d701c8f230\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.496899 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6tz7\" (UniqueName: \"kubernetes.io/projected/aea8db27-38e0-4ac8-8835-b9d701c8f230-kube-api-access-b6tz7\") pod \"aea8db27-38e0-4ac8-8835-b9d701c8f230\" (UID: \"aea8db27-38e0-4ac8-8835-b9d701c8f230\") " Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.497981 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-utilities" (OuterVolumeSpecName: "utilities") pod "aea8db27-38e0-4ac8-8835-b9d701c8f230" (UID: "aea8db27-38e0-4ac8-8835-b9d701c8f230"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.505269 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aea8db27-38e0-4ac8-8835-b9d701c8f230-kube-api-access-b6tz7" (OuterVolumeSpecName: "kube-api-access-b6tz7") pod "aea8db27-38e0-4ac8-8835-b9d701c8f230" (UID: "aea8db27-38e0-4ac8-8835-b9d701c8f230"). InnerVolumeSpecName "kube-api-access-b6tz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.550822 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aea8db27-38e0-4ac8-8835-b9d701c8f230" (UID: "aea8db27-38e0-4ac8-8835-b9d701c8f230"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.597837 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.597885 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6tz7\" (UniqueName: \"kubernetes.io/projected/aea8db27-38e0-4ac8-8835-b9d701c8f230-kube-api-access-b6tz7\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:55 crc kubenswrapper[4656]: I0128 15:33:55.597898 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8db27-38e0-4ac8-8835-b9d701c8f230-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.106395 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6s8p" event={"ID":"aea8db27-38e0-4ac8-8835-b9d701c8f230","Type":"ContainerDied","Data":"c85a20e117717387a489dc5dd61783b4e5fcd7cda3111851f391796dcda146f0"} Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.106523 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d6s8p" Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.106537 4656 scope.go:117] "RemoveContainer" containerID="7dbc24f34e40013f613a8a2c137595d28f480ccce0e5b63798e70e4ef84cc0e5" Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.124772 4656 scope.go:117] "RemoveContainer" containerID="9c21dc5b63458bfdf67af2165e2644c35c783d3c4eb17c296ab48b7688983ee9" Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.140524 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d6s8p"] Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.145464 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d6s8p"] Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.150553 4656 scope.go:117] "RemoveContainer" containerID="fe8b322c739039fd04ab424c28e05ffbfdd4af0e21a26d5ce1fb07d9670baa75" Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.203593 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-tknhq" Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.204005 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-tknhq" Jan 28 15:33:56 crc kubenswrapper[4656]: I0128 15:33:56.234831 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-tknhq" Jan 28 15:33:57 crc kubenswrapper[4656]: I0128 15:33:57.138918 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-tknhq" Jan 28 15:33:57 crc kubenswrapper[4656]: I0128 15:33:57.181142 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" path="/var/lib/kubelet/pods/aea8db27-38e0-4ac8-8835-b9d701c8f230/volumes" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.448057 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r7xrq"] Jan 28 15:33:59 crc kubenswrapper[4656]: E0128 15:33:59.448843 4656 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="extract-content" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.448864 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="extract-content" Jan 28 15:33:59 crc kubenswrapper[4656]: E0128 15:33:59.448886 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586a9839-5486-44ed-bccd-7219927f1582" containerName="registry-server" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.448893 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="586a9839-5486-44ed-bccd-7219927f1582" containerName="registry-server" Jan 28 15:33:59 crc kubenswrapper[4656]: E0128 15:33:59.448905 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="extract-utilities" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.448913 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="extract-utilities" Jan 28 15:33:59 crc kubenswrapper[4656]: E0128 15:33:59.448931 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="registry-server" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.448939 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="registry-server" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.449133 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea8db27-38e0-4ac8-8835-b9d701c8f230" containerName="registry-server" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.449155 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="586a9839-5486-44ed-bccd-7219927f1582" containerName="registry-server" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.450130 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.475294 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7xrq"] Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.478199 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-catalog-content\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.478296 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfvnn\" (UniqueName: \"kubernetes.io/projected/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-kube-api-access-cfvnn\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.478416 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-utilities\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.579539 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-utilities\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.579616 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-catalog-content\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.579678 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfvnn\" (UniqueName: \"kubernetes.io/projected/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-kube-api-access-cfvnn\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.580255 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-utilities\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.580330 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-catalog-content\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.615924 4656 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cfvnn\" (UniqueName: \"kubernetes.io/projected/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-kube-api-access-cfvnn\") pod \"certified-operators-r7xrq\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:33:59 crc kubenswrapper[4656]: I0128 15:33:59.768714 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:34:00 crc kubenswrapper[4656]: I0128 15:34:00.065936 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7xrq"] Jan 28 15:34:00 crc kubenswrapper[4656]: W0128 15:34:00.075728 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10cc0f8e_3b8e_4ce0_9141_fd48481ff622.slice/crio-3e359e795f61a217cdcafd8cb2067b82bb9505f246593f62c063892da4406bbd WatchSource:0}: Error finding container 3e359e795f61a217cdcafd8cb2067b82bb9505f246593f62c063892da4406bbd: Status 404 returned error can't find the container with id 3e359e795f61a217cdcafd8cb2067b82bb9505f246593f62c063892da4406bbd Jan 28 15:34:00 crc kubenswrapper[4656]: I0128 15:34:00.152415 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7xrq" event={"ID":"10cc0f8e-3b8e-4ce0-9141-fd48481ff622","Type":"ContainerStarted","Data":"3e359e795f61a217cdcafd8cb2067b82bb9505f246593f62c063892da4406bbd"} Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.161072 4656 generic.go:334] "Generic (PLEG): container finished" podID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerID="5645c2dd6f454c80b7bbeb869d6c9e929d591ed8e12d98c6d8681e2f82ba32bb" exitCode=0 Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.161434 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7xrq" event={"ID":"10cc0f8e-3b8e-4ce0-9141-fd48481ff622","Type":"ContainerDied","Data":"5645c2dd6f454c80b7bbeb869d6c9e929d591ed8e12d98c6d8681e2f82ba32bb"} Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.693377 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq"] Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.694579 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.707355 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq"] Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.707790 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-bgc4w" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.808795 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6fgx\" (UniqueName: \"kubernetes.io/projected/a11dcdce-e6bc-48a2-b273-3755e5aee495-kube-api-access-k6fgx\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.808926 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-bundle\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.808966 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-util\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.910754 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6fgx\" (UniqueName: \"kubernetes.io/projected/a11dcdce-e6bc-48a2-b273-3755e5aee495-kube-api-access-k6fgx\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.910855 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-bundle\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.910910 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-util\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.911595 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-util\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.911603 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-bundle\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:01 crc kubenswrapper[4656]: I0128 15:34:01.929854 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6fgx\" (UniqueName: \"kubernetes.io/projected/a11dcdce-e6bc-48a2-b273-3755e5aee495-kube-api-access-k6fgx\") pod \"ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:02 crc kubenswrapper[4656]: I0128 15:34:02.019281 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:02 crc kubenswrapper[4656]: I0128 15:34:02.167836 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7xrq" event={"ID":"10cc0f8e-3b8e-4ce0-9141-fd48481ff622","Type":"ContainerStarted","Data":"b9e878f73cd88ccd02aece43db5b72158f4dc09151f8f8581608bf10b91e8667"} Jan 28 15:34:02 crc kubenswrapper[4656]: I0128 15:34:02.340031 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq"] Jan 28 15:34:03 crc kubenswrapper[4656]: I0128 15:34:03.175273 4656 generic.go:334] "Generic (PLEG): container finished" podID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerID="b9e878f73cd88ccd02aece43db5b72158f4dc09151f8f8581608bf10b91e8667" exitCode=0 Jan 28 15:34:03 crc kubenswrapper[4656]: I0128 15:34:03.177584 4656 generic.go:334] "Generic (PLEG): container finished" podID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerID="2c9c6bff4cedaf5dd39afb3e2d6c64d674b622d996bc32d90c2a5a5c6e725e02" exitCode=0 Jan 28 15:34:03 crc kubenswrapper[4656]: I0128 15:34:03.179721 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7xrq" event={"ID":"10cc0f8e-3b8e-4ce0-9141-fd48481ff622","Type":"ContainerDied","Data":"b9e878f73cd88ccd02aece43db5b72158f4dc09151f8f8581608bf10b91e8667"} Jan 28 15:34:03 crc kubenswrapper[4656]: I0128 15:34:03.179752 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" event={"ID":"a11dcdce-e6bc-48a2-b273-3755e5aee495","Type":"ContainerDied","Data":"2c9c6bff4cedaf5dd39afb3e2d6c64d674b622d996bc32d90c2a5a5c6e725e02"} Jan 28 15:34:03 crc kubenswrapper[4656]: I0128 15:34:03.179763 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" event={"ID":"a11dcdce-e6bc-48a2-b273-3755e5aee495","Type":"ContainerStarted","Data":"30e60863f34de76684074956da200a0e1a04f3f9c65027ce9ecc16a1f1c826a1"} Jan 28 15:34:04 crc 
kubenswrapper[4656]: I0128 15:34:04.193462 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7xrq" event={"ID":"10cc0f8e-3b8e-4ce0-9141-fd48481ff622","Type":"ContainerStarted","Data":"37b3ed1147c4ee45df4ae55dc2fda63b24c7adf4b374f9696bf3214bfcc5b1a8"} Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.197602 4656 generic.go:334] "Generic (PLEG): container finished" podID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerID="4c9dd1da445842fdfec59ae60f4a2121d08dac1104abf88f297c01649350b333" exitCode=0 Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.197641 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" event={"ID":"a11dcdce-e6bc-48a2-b273-3755e5aee495","Type":"ContainerDied","Data":"4c9dd1da445842fdfec59ae60f4a2121d08dac1104abf88f297c01649350b333"} Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.245965 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r7xrq" podStartSLOduration=2.7380914819999997 podStartE2EDuration="5.245939674s" podCreationTimestamp="2026-01-28 15:33:59 +0000 UTC" firstStartedPulling="2026-01-28 15:34:01.163589385 +0000 UTC m=+931.671760189" lastFinishedPulling="2026-01-28 15:34:03.671437577 +0000 UTC m=+934.179608381" observedRunningTime="2026-01-28 15:34:04.221974563 +0000 UTC m=+934.730145367" watchObservedRunningTime="2026-01-28 15:34:04.245939674 +0000 UTC m=+934.754110478" Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.664773 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rjtm5"] Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.666593 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.680349 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjtm5"] Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.858826 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-catalog-content\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.858958 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz7ns\" (UniqueName: \"kubernetes.io/projected/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-kube-api-access-qz7ns\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.858994 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-utilities\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.960184 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-catalog-content\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.960527 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz7ns\" (UniqueName: \"kubernetes.io/projected/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-kube-api-access-qz7ns\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.961085 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-utilities\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.960929 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-catalog-content\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.961475 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-utilities\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.984126 4656 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qz7ns\" (UniqueName: \"kubernetes.io/projected/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-kube-api-access-qz7ns\") pod \"redhat-marketplace-rjtm5\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:04 crc kubenswrapper[4656]: I0128 15:34:04.989665 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:05 crc kubenswrapper[4656]: I0128 15:34:05.216043 4656 generic.go:334] "Generic (PLEG): container finished" podID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerID="c2eef08f3534083715182ac4af984bbef82d830a11d3f94be9ddf07c8db2098f" exitCode=0 Jan 28 15:34:05 crc kubenswrapper[4656]: I0128 15:34:05.216197 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" event={"ID":"a11dcdce-e6bc-48a2-b273-3755e5aee495","Type":"ContainerDied","Data":"c2eef08f3534083715182ac4af984bbef82d830a11d3f94be9ddf07c8db2098f"} Jan 28 15:34:05 crc kubenswrapper[4656]: I0128 15:34:05.277700 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjtm5"] Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.226323 4656 generic.go:334] "Generic (PLEG): container finished" podID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerID="a38702ba1010780fceeb4e14ba15c9a6be53702dae80ef7db7486122c9680a4c" exitCode=0 Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.227445 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjtm5" event={"ID":"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90","Type":"ContainerDied","Data":"a38702ba1010780fceeb4e14ba15c9a6be53702dae80ef7db7486122c9680a4c"} Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.227471 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjtm5" event={"ID":"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90","Type":"ContainerStarted","Data":"0a1f1983ebbdfd9470548313b59bc006ed4c1ebef587436ac923240c5763c30c"} Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.465639 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.484226 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-bundle\") pod \"a11dcdce-e6bc-48a2-b273-3755e5aee495\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.484304 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-util\") pod \"a11dcdce-e6bc-48a2-b273-3755e5aee495\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.484355 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6fgx\" (UniqueName: \"kubernetes.io/projected/a11dcdce-e6bc-48a2-b273-3755e5aee495-kube-api-access-k6fgx\") pod \"a11dcdce-e6bc-48a2-b273-3755e5aee495\" (UID: \"a11dcdce-e6bc-48a2-b273-3755e5aee495\") " Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.485057 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-bundle" (OuterVolumeSpecName: "bundle") pod "a11dcdce-e6bc-48a2-b273-3755e5aee495" (UID: "a11dcdce-e6bc-48a2-b273-3755e5aee495"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.494750 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a11dcdce-e6bc-48a2-b273-3755e5aee495-kube-api-access-k6fgx" (OuterVolumeSpecName: "kube-api-access-k6fgx") pod "a11dcdce-e6bc-48a2-b273-3755e5aee495" (UID: "a11dcdce-e6bc-48a2-b273-3755e5aee495"). InnerVolumeSpecName "kube-api-access-k6fgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.573575 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-util" (OuterVolumeSpecName: "util") pod "a11dcdce-e6bc-48a2-b273-3755e5aee495" (UID: "a11dcdce-e6bc-48a2-b273-3755e5aee495"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.593254 4656 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.593317 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6fgx\" (UniqueName: \"kubernetes.io/projected/a11dcdce-e6bc-48a2-b273-3755e5aee495-kube-api-access-k6fgx\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:06 crc kubenswrapper[4656]: I0128 15:34:06.593330 4656 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a11dcdce-e6bc-48a2-b273-3755e5aee495-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:07 crc kubenswrapper[4656]: I0128 15:34:07.235644 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" event={"ID":"a11dcdce-e6bc-48a2-b273-3755e5aee495","Type":"ContainerDied","Data":"30e60863f34de76684074956da200a0e1a04f3f9c65027ce9ecc16a1f1c826a1"} Jan 28 15:34:07 crc kubenswrapper[4656]: I0128 15:34:07.235700 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq" Jan 28 15:34:07 crc kubenswrapper[4656]: I0128 15:34:07.235915 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30e60863f34de76684074956da200a0e1a04f3f9c65027ce9ecc16a1f1c826a1" Jan 28 15:34:08 crc kubenswrapper[4656]: I0128 15:34:08.244941 4656 generic.go:334] "Generic (PLEG): container finished" podID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerID="7ff58517afbd24556b20c54ce145108e3bbe303b3565c3ff9215649e73488d31" exitCode=0 Jan 28 15:34:08 crc kubenswrapper[4656]: I0128 15:34:08.244992 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjtm5" event={"ID":"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90","Type":"ContainerDied","Data":"7ff58517afbd24556b20c54ce145108e3bbe303b3565c3ff9215649e73488d31"} Jan 28 15:34:09 crc kubenswrapper[4656]: I0128 15:34:09.769203 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:34:09 crc kubenswrapper[4656]: I0128 15:34:09.769806 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:34:09 crc kubenswrapper[4656]: I0128 15:34:09.816535 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:34:10 crc kubenswrapper[4656]: I0128 15:34:10.260216 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjtm5" event={"ID":"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90","Type":"ContainerStarted","Data":"f3e912696e56567cc3ca1a3ddbe7eb9c088b3d07cad9fd5164dc26cbe93c0dcd"} Jan 28 15:34:10 crc kubenswrapper[4656]: I0128 15:34:10.315415 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:34:10 crc kubenswrapper[4656]: I0128 15:34:10.332463 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rjtm5" podStartSLOduration=3.133571562 
podStartE2EDuration="6.332438941s" podCreationTimestamp="2026-01-28 15:34:04 +0000 UTC" firstStartedPulling="2026-01-28 15:34:06.229966518 +0000 UTC m=+936.738137332" lastFinishedPulling="2026-01-28 15:34:09.428833907 +0000 UTC m=+939.937004711" observedRunningTime="2026-01-28 15:34:10.282556343 +0000 UTC m=+940.790727137" watchObservedRunningTime="2026-01-28 15:34:10.332438941 +0000 UTC m=+940.840609745" Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.570450 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb"] Jan 28 15:34:11 crc kubenswrapper[4656]: E0128 15:34:11.571131 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerName="util" Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.571148 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerName="util" Jan 28 15:34:11 crc kubenswrapper[4656]: E0128 15:34:11.571220 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerName="extract" Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.571230 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerName="extract" Jan 28 15:34:11 crc kubenswrapper[4656]: E0128 15:34:11.571251 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerName="pull" Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.571261 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerName="pull" Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.571412 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="a11dcdce-e6bc-48a2-b273-3755e5aee495" containerName="extract" Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.571925 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb" Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.575389 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-n8d8s" Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.609526 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb"] Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.759604 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n77cx\" (UniqueName: \"kubernetes.io/projected/ab5fdcdc-7606-4e97-a65a-c98545c1a74a-kube-api-access-n77cx\") pod \"openstack-operator-controller-init-678d9cfb88-c5xvb\" (UID: \"ab5fdcdc-7606-4e97-a65a-c98545c1a74a\") " pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb" Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.860622 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n77cx\" (UniqueName: \"kubernetes.io/projected/ab5fdcdc-7606-4e97-a65a-c98545c1a74a-kube-api-access-n77cx\") pod \"openstack-operator-controller-init-678d9cfb88-c5xvb\" (UID: \"ab5fdcdc-7606-4e97-a65a-c98545c1a74a\") " pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb" Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.883097 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n77cx\" (UniqueName: \"kubernetes.io/projected/ab5fdcdc-7606-4e97-a65a-c98545c1a74a-kube-api-access-n77cx\") pod \"openstack-operator-controller-init-678d9cfb88-c5xvb\" (UID: \"ab5fdcdc-7606-4e97-a65a-c98545c1a74a\") " pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb" Jan 28 15:34:11 crc kubenswrapper[4656]: I0128 15:34:11.891294 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb" Jan 28 15:34:12 crc kubenswrapper[4656]: I0128 15:34:12.387663 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb"] Jan 28 15:34:12 crc kubenswrapper[4656]: W0128 15:34:12.392627 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab5fdcdc_7606_4e97_a65a_c98545c1a74a.slice/crio-99c29b134aa0cb127e32a07f02e2fea2f3641ed8f67a625e143fb99094196b02 WatchSource:0}: Error finding container 99c29b134aa0cb127e32a07f02e2fea2f3641ed8f67a625e143fb99094196b02: Status 404 returned error can't find the container with id 99c29b134aa0cb127e32a07f02e2fea2f3641ed8f67a625e143fb99094196b02 Jan 28 15:34:13 crc kubenswrapper[4656]: I0128 15:34:13.281020 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb" event={"ID":"ab5fdcdc-7606-4e97-a65a-c98545c1a74a","Type":"ContainerStarted","Data":"99c29b134aa0cb127e32a07f02e2fea2f3641ed8f67a625e143fb99094196b02"} Jan 28 15:34:14 crc kubenswrapper[4656]: I0128 15:34:14.990370 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:14 crc kubenswrapper[4656]: I0128 15:34:14.990729 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:15 crc kubenswrapper[4656]: I0128 15:34:15.054979 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:15 crc kubenswrapper[4656]: I0128 15:34:15.340852 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:16 crc kubenswrapper[4656]: I0128 15:34:16.443468 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7xrq"] Jan 28 15:34:16 crc kubenswrapper[4656]: I0128 15:34:16.443778 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r7xrq" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerName="registry-server" containerID="cri-o://37b3ed1147c4ee45df4ae55dc2fda63b24c7adf4b374f9696bf3214bfcc5b1a8" gracePeriod=2 Jan 28 15:34:16 crc kubenswrapper[4656]: I0128 15:34:16.638200 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjtm5"] Jan 28 15:34:17 crc kubenswrapper[4656]: I0128 15:34:17.306223 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rjtm5" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerName="registry-server" containerID="cri-o://f3e912696e56567cc3ca1a3ddbe7eb9c088b3d07cad9fd5164dc26cbe93c0dcd" gracePeriod=2 Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.317891 4656 generic.go:334] "Generic (PLEG): container finished" podID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerID="37b3ed1147c4ee45df4ae55dc2fda63b24c7adf4b374f9696bf3214bfcc5b1a8" exitCode=0 Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.318072 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7xrq" 
event={"ID":"10cc0f8e-3b8e-4ce0-9141-fd48481ff622","Type":"ContainerDied","Data":"37b3ed1147c4ee45df4ae55dc2fda63b24c7adf4b374f9696bf3214bfcc5b1a8"} Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.649969 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.761626 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-utilities\") pod \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.761717 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfvnn\" (UniqueName: \"kubernetes.io/projected/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-kube-api-access-cfvnn\") pod \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.761748 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-catalog-content\") pod \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\" (UID: \"10cc0f8e-3b8e-4ce0-9141-fd48481ff622\") " Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.780325 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-kube-api-access-cfvnn" (OuterVolumeSpecName: "kube-api-access-cfvnn") pod "10cc0f8e-3b8e-4ce0-9141-fd48481ff622" (UID: "10cc0f8e-3b8e-4ce0-9141-fd48481ff622"). InnerVolumeSpecName "kube-api-access-cfvnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.783283 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-utilities" (OuterVolumeSpecName: "utilities") pod "10cc0f8e-3b8e-4ce0-9141-fd48481ff622" (UID: "10cc0f8e-3b8e-4ce0-9141-fd48481ff622"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.818291 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10cc0f8e-3b8e-4ce0-9141-fd48481ff622" (UID: "10cc0f8e-3b8e-4ce0-9141-fd48481ff622"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.863580 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.863621 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfvnn\" (UniqueName: \"kubernetes.io/projected/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-kube-api-access-cfvnn\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:18 crc kubenswrapper[4656]: I0128 15:34:18.863637 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10cc0f8e-3b8e-4ce0-9141-fd48481ff622-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.331239 4656 generic.go:334] "Generic (PLEG): container finished" podID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerID="f3e912696e56567cc3ca1a3ddbe7eb9c088b3d07cad9fd5164dc26cbe93c0dcd" exitCode=0 Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.331280 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjtm5" event={"ID":"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90","Type":"ContainerDied","Data":"f3e912696e56567cc3ca1a3ddbe7eb9c088b3d07cad9fd5164dc26cbe93c0dcd"} Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.334859 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7xrq" event={"ID":"10cc0f8e-3b8e-4ce0-9141-fd48481ff622","Type":"ContainerDied","Data":"3e359e795f61a217cdcafd8cb2067b82bb9505f246593f62c063892da4406bbd"} Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.334906 4656 scope.go:117] "RemoveContainer" containerID="37b3ed1147c4ee45df4ae55dc2fda63b24c7adf4b374f9696bf3214bfcc5b1a8" Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.335049 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7xrq" Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.385007 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.398680 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7xrq"] Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.404034 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r7xrq"] Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.475530 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz7ns\" (UniqueName: \"kubernetes.io/projected/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-kube-api-access-qz7ns\") pod \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.475611 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-catalog-content\") pod \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.475669 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-utilities\") pod \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\" (UID: \"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90\") " Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.476592 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-utilities" (OuterVolumeSpecName: "utilities") pod "6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" (UID: "6be28c80-fbc8-4f9a-b5df-247d3ad5fa90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.483401 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-kube-api-access-qz7ns" (OuterVolumeSpecName: "kube-api-access-qz7ns") pod "6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" (UID: "6be28c80-fbc8-4f9a-b5df-247d3ad5fa90"). InnerVolumeSpecName "kube-api-access-qz7ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.500254 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" (UID: "6be28c80-fbc8-4f9a-b5df-247d3ad5fa90"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.577094 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz7ns\" (UniqueName: \"kubernetes.io/projected/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-kube-api-access-qz7ns\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.577129 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:19 crc kubenswrapper[4656]: I0128 15:34:19.577138 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:20 crc kubenswrapper[4656]: I0128 15:34:20.349639 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjtm5" event={"ID":"6be28c80-fbc8-4f9a-b5df-247d3ad5fa90","Type":"ContainerDied","Data":"0a1f1983ebbdfd9470548313b59bc006ed4c1ebef587436ac923240c5763c30c"} Jan 28 15:34:20 crc kubenswrapper[4656]: I0128 15:34:20.349761 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjtm5" Jan 28 15:34:20 crc kubenswrapper[4656]: I0128 15:34:20.380976 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjtm5"] Jan 28 15:34:20 crc kubenswrapper[4656]: I0128 15:34:20.386550 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjtm5"] Jan 28 15:34:21 crc kubenswrapper[4656]: I0128 15:34:21.574354 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" path="/var/lib/kubelet/pods/10cc0f8e-3b8e-4ce0-9141-fd48481ff622/volumes" Jan 28 15:34:21 crc kubenswrapper[4656]: I0128 15:34:21.575432 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" path="/var/lib/kubelet/pods/6be28c80-fbc8-4f9a-b5df-247d3ad5fa90/volumes" Jan 28 15:34:24 crc kubenswrapper[4656]: I0128 15:34:24.054220 4656 scope.go:117] "RemoveContainer" containerID="b9e878f73cd88ccd02aece43db5b72158f4dc09151f8f8581608bf10b91e8667" Jan 28 15:34:24 crc kubenswrapper[4656]: I0128 15:34:24.706115 4656 scope.go:117] "RemoveContainer" containerID="5645c2dd6f454c80b7bbeb869d6c9e929d591ed8e12d98c6d8681e2f82ba32bb" Jan 28 15:34:24 crc kubenswrapper[4656]: I0128 15:34:24.728271 4656 scope.go:117] "RemoveContainer" containerID="f3e912696e56567cc3ca1a3ddbe7eb9c088b3d07cad9fd5164dc26cbe93c0dcd" Jan 28 15:34:24 crc kubenswrapper[4656]: I0128 15:34:24.752416 4656 scope.go:117] "RemoveContainer" containerID="7ff58517afbd24556b20c54ce145108e3bbe303b3565c3ff9215649e73488d31" Jan 28 15:34:24 crc kubenswrapper[4656]: I0128 15:34:24.771455 4656 scope.go:117] "RemoveContainer" containerID="a38702ba1010780fceeb4e14ba15c9a6be53702dae80ef7db7486122c9680a4c" Jan 28 15:34:26 crc kubenswrapper[4656]: I0128 15:34:26.601888 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb" event={"ID":"ab5fdcdc-7606-4e97-a65a-c98545c1a74a","Type":"ContainerStarted","Data":"12301810d801383635a6729a0fea2084d71eed671a8e76f6f559a4275845dd3f"} Jan 28 15:34:26 crc kubenswrapper[4656]: I0128 15:34:26.602522 4656 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb" Jan 28 15:34:26 crc kubenswrapper[4656]: I0128 15:34:26.632675 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb" podStartSLOduration=2.340251692 podStartE2EDuration="15.632654528s" podCreationTimestamp="2026-01-28 15:34:11 +0000 UTC" firstStartedPulling="2026-01-28 15:34:12.396862071 +0000 UTC m=+942.905032865" lastFinishedPulling="2026-01-28 15:34:25.689264897 +0000 UTC m=+956.197435701" observedRunningTime="2026-01-28 15:34:26.631123454 +0000 UTC m=+957.139294248" watchObservedRunningTime="2026-01-28 15:34:26.632654528 +0000 UTC m=+957.140825332" Jan 28 15:34:31 crc kubenswrapper[4656]: I0128 15:34:31.898304 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-678d9cfb88-c5xvb" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.338720 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f"] Jan 28 15:34:50 crc kubenswrapper[4656]: E0128 15:34:50.339772 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerName="registry-server" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.339797 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerName="registry-server" Jan 28 15:34:50 crc kubenswrapper[4656]: E0128 15:34:50.339828 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerName="registry-server" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.339837 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerName="registry-server" Jan 28 15:34:50 crc kubenswrapper[4656]: E0128 15:34:50.339848 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerName="extract-content" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.339856 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerName="extract-content" Jan 28 15:34:50 crc kubenswrapper[4656]: E0128 15:34:50.339917 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerName="extract-content" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.339926 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerName="extract-content" Jan 28 15:34:50 crc kubenswrapper[4656]: E0128 15:34:50.339941 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerName="extract-utilities" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.339949 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerName="extract-utilities" Jan 28 15:34:50 crc kubenswrapper[4656]: E0128 15:34:50.339962 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerName="extract-utilities" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.339969 4656 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerName="extract-utilities" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.340150 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="6be28c80-fbc8-4f9a-b5df-247d3ad5fa90" containerName="registry-server" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.340200 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="10cc0f8e-3b8e-4ce0-9141-fd48481ff622" containerName="registry-server" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.340911 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.345105 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ld9jz" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.352974 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.354005 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.358348 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-t87h2" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.371571 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.379505 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.397287 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.398117 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.410638 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8rqbd" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.437905 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.466629 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.467565 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.470921 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2txf\" (UniqueName: \"kubernetes.io/projected/0cf0a4ad-85dd-47df-9307-e469f075a098-kube-api-access-c2txf\") pod \"cinder-operator-controller-manager-f6487bd57-jwv7f\" (UID: \"0cf0a4ad-85dd-47df-9307-e469f075a098\") " pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.470985 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhlmp\" (UniqueName: \"kubernetes.io/projected/6ce4cdbc-3227-4679-8da9-9fd537996bd7-kube-api-access-jhlmp\") pod \"barbican-operator-controller-manager-6bc7f4f4cf-hd57q\" (UID: \"6ce4cdbc-3227-4679-8da9-9fd537996bd7\") " pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.471698 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hc6jb" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.505236 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.506109 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.509315 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.510913 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-kddk9" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.548613 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.557453 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.558436 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.572420 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bk555" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.572626 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhlmp\" (UniqueName: \"kubernetes.io/projected/6ce4cdbc-3227-4679-8da9-9fd537996bd7-kube-api-access-jhlmp\") pod \"barbican-operator-controller-manager-6bc7f4f4cf-hd57q\" (UID: \"6ce4cdbc-3227-4679-8da9-9fd537996bd7\") " pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.572694 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn4q4\" (UniqueName: \"kubernetes.io/projected/cfeab083-1268-47aa-938e-bd91036755de-kube-api-access-rn4q4\") pod \"designate-operator-controller-manager-66dfbd6f5d-r8cjw\" (UID: \"cfeab083-1268-47aa-938e-bd91036755de\") " pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.572739 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pck4m\" (UniqueName: \"kubernetes.io/projected/113ba11f-aeba-4710-b5f6-0991e9766d45-kube-api-access-pck4m\") pod \"glance-operator-controller-manager-6db5dbd896-cfpjq\" (UID: \"113ba11f-aeba-4710-b5f6-0991e9766d45\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.572815 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2txf\" (UniqueName: \"kubernetes.io/projected/0cf0a4ad-85dd-47df-9307-e469f075a098-kube-api-access-c2txf\") pod \"cinder-operator-controller-manager-f6487bd57-jwv7f\" (UID: \"0cf0a4ad-85dd-47df-9307-e469f075a098\") " pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.585088 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.586131 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.595799 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qc7x2" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.595997 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.620364 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.638380 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.646586 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2txf\" (UniqueName: \"kubernetes.io/projected/0cf0a4ad-85dd-47df-9307-e469f075a098-kube-api-access-c2txf\") pod \"cinder-operator-controller-manager-f6487bd57-jwv7f\" (UID: \"0cf0a4ad-85dd-47df-9307-e469f075a098\") " pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.660640 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.671926 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhlmp\" (UniqueName: \"kubernetes.io/projected/6ce4cdbc-3227-4679-8da9-9fd537996bd7-kube-api-access-jhlmp\") pod \"barbican-operator-controller-manager-6bc7f4f4cf-hd57q\" (UID: \"6ce4cdbc-3227-4679-8da9-9fd537996bd7\") " pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.672295 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.673945 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn4q4\" (UniqueName: \"kubernetes.io/projected/cfeab083-1268-47aa-938e-bd91036755de-kube-api-access-rn4q4\") pod \"designate-operator-controller-manager-66dfbd6f5d-r8cjw\" (UID: \"cfeab083-1268-47aa-938e-bd91036755de\") " pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.674031 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pck4m\" (UniqueName: \"kubernetes.io/projected/113ba11f-aeba-4710-b5f6-0991e9766d45-kube-api-access-pck4m\") pod \"glance-operator-controller-manager-6db5dbd896-cfpjq\" (UID: \"113ba11f-aeba-4710-b5f6-0991e9766d45\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.674075 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz54t\" (UniqueName: \"kubernetes.io/projected/45be18b4-f249-4c09-8875-9959686d7f8f-kube-api-access-jz54t\") pod \"heat-operator-controller-manager-587c6bfdcf-xjnqt\" (UID: \"45be18b4-f249-4c09-8875-9959686d7f8f\") " pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.674104 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l2vx\" (UniqueName: \"kubernetes.io/projected/1bfa2d1e-9ab0-478a-a19d-d031a1a8a312-kube-api-access-6l2vx\") pod \"horizon-operator-controller-manager-5fb775575f-9q5lw\" (UID: \"1bfa2d1e-9ab0-478a-a19d-d031a1a8a312\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.691407 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.692569 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.698565 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-w8b4l" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.735091 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pck4m\" (UniqueName: \"kubernetes.io/projected/113ba11f-aeba-4710-b5f6-0991e9766d45-kube-api-access-pck4m\") pod \"glance-operator-controller-manager-6db5dbd896-cfpjq\" (UID: \"113ba11f-aeba-4710-b5f6-0991e9766d45\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.756285 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn4q4\" (UniqueName: \"kubernetes.io/projected/cfeab083-1268-47aa-938e-bd91036755de-kube-api-access-rn4q4\") pod \"designate-operator-controller-manager-66dfbd6f5d-r8cjw\" (UID: \"cfeab083-1268-47aa-938e-bd91036755de\") " pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.774959 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz54t\" (UniqueName: \"kubernetes.io/projected/45be18b4-f249-4c09-8875-9959686d7f8f-kube-api-access-jz54t\") pod \"heat-operator-controller-manager-587c6bfdcf-xjnqt\" (UID: \"45be18b4-f249-4c09-8875-9959686d7f8f\") " pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.775022 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l2vx\" (UniqueName: \"kubernetes.io/projected/1bfa2d1e-9ab0-478a-a19d-d031a1a8a312-kube-api-access-6l2vx\") pod \"horizon-operator-controller-manager-5fb775575f-9q5lw\" (UID: \"1bfa2d1e-9ab0-478a-a19d-d031a1a8a312\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.775057 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbm4n\" (UniqueName: \"kubernetes.io/projected/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-kube-api-access-vbm4n\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.775122 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.775548 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.776354 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.795215 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.795675 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.832349 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-vp774" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.853653 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l2vx\" (UniqueName: \"kubernetes.io/projected/1bfa2d1e-9ab0-478a-a19d-d031a1a8a312-kube-api-access-6l2vx\") pod \"horizon-operator-controller-manager-5fb775575f-9q5lw\" (UID: \"1bfa2d1e-9ab0-478a-a19d-d031a1a8a312\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.853743 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.854592 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz54t\" (UniqueName: \"kubernetes.io/projected/45be18b4-f249-4c09-8875-9959686d7f8f-kube-api-access-jz54t\") pod \"heat-operator-controller-manager-587c6bfdcf-xjnqt\" (UID: \"45be18b4-f249-4c09-8875-9959686d7f8f\") " pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.875572 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-765668569f-7kctj"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.876624 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.878907 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkhq7\" (UniqueName: \"kubernetes.io/projected/ae47e69a-49f4-4b1a-8d68-068b5e99f22a-kube-api-access-rkhq7\") pod \"ironic-operator-controller-manager-958664b5-m9jtk\" (UID: \"ae47e69a-49f4-4b1a-8d68-068b5e99f22a\") " pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.878978 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.879070 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbm4n\" (UniqueName: \"kubernetes.io/projected/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-kube-api-access-vbm4n\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.879110 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbknf\" (UniqueName: \"kubernetes.io/projected/a5bdaf78-b590-429f-bc9b-46c67a369456-kube-api-access-mbknf\") pod \"keystone-operator-controller-manager-7b84b46695-86ht2\" (UID: \"a5bdaf78-b590-429f-bc9b-46c67a369456\") " pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" Jan 28 15:34:50 crc kubenswrapper[4656]: E0128 15:34:50.879358 4656 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:50 crc kubenswrapper[4656]: E0128 15:34:50.879494 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert podName:7341d49c-e9a9-4108-8a2c-bf808ccb49cf nodeName:}" failed. No retries permitted until 2026-01-28 15:34:51.379444491 +0000 UTC m=+981.887615295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert") pod "infra-operator-controller-manager-79955696d6-bfl2p" (UID: "7341d49c-e9a9-4108-8a2c-bf808ccb49cf") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.883317 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.884447 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.890688 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.893950 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-vjs7q" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.894261 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-czpv6" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.894655 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.908349 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj"] Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.909440 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbm4n\" (UniqueName: \"kubernetes.io/projected/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-kube-api-access-vbm4n\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.909483 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" Jan 28 15:34:50 crc kubenswrapper[4656]: I0128 15:34:50.915695 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-gg2dr" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.020425 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkhq7\" (UniqueName: \"kubernetes.io/projected/ae47e69a-49f4-4b1a-8d68-068b5e99f22a-kube-api-access-rkhq7\") pod \"ironic-operator-controller-manager-958664b5-m9jtk\" (UID: \"ae47e69a-49f4-4b1a-8d68-068b5e99f22a\") " pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.039244 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.020613 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnlpq\" (UniqueName: \"kubernetes.io/projected/0a83428f-312c-4590-beb3-8da4994c8951-kube-api-access-dnlpq\") pod \"manila-operator-controller-manager-765668569f-7kctj\" (UID: \"0a83428f-312c-4590-beb3-8da4994c8951\") " pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.045803 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kng9p\" (UniqueName: \"kubernetes.io/projected/9277e421-df3a-49a2-81cc-86d0f7c65809-kube-api-access-kng9p\") pod \"mariadb-operator-controller-manager-67bf948998-p92zm\" (UID: \"9277e421-df3a-49a2-81cc-86d0f7c65809\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.046038 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbknf\" (UniqueName: \"kubernetes.io/projected/a5bdaf78-b590-429f-bc9b-46c67a369456-kube-api-access-mbknf\") pod \"keystone-operator-controller-manager-7b84b46695-86ht2\" (UID: \"a5bdaf78-b590-429f-bc9b-46c67a369456\") " pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.066595 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-765668569f-7kctj"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.127129 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.127372 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbknf\" (UniqueName: \"kubernetes.io/projected/a5bdaf78-b590-429f-bc9b-46c67a369456-kube-api-access-mbknf\") pod \"keystone-operator-controller-manager-7b84b46695-86ht2\" (UID: \"a5bdaf78-b590-429f-bc9b-46c67a369456\") " pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.129604 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.151654 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkhq7\" (UniqueName: \"kubernetes.io/projected/ae47e69a-49f4-4b1a-8d68-068b5e99f22a-kube-api-access-rkhq7\") pod \"ironic-operator-controller-manager-958664b5-m9jtk\" (UID: \"ae47e69a-49f4-4b1a-8d68-068b5e99f22a\") " pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.151745 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.155299 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnlpq\" (UniqueName: \"kubernetes.io/projected/0a83428f-312c-4590-beb3-8da4994c8951-kube-api-access-dnlpq\") pod \"manila-operator-controller-manager-765668569f-7kctj\" (UID: \"0a83428f-312c-4590-beb3-8da4994c8951\") " pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.155365 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kng9p\" (UniqueName: \"kubernetes.io/projected/9277e421-df3a-49a2-81cc-86d0f7c65809-kube-api-access-kng9p\") pod \"mariadb-operator-controller-manager-67bf948998-p92zm\" (UID: \"9277e421-df3a-49a2-81cc-86d0f7c65809\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.155399 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.155468 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zvdk\" (UniqueName: \"kubernetes.io/projected/9954b0be-71f8-430b-a61f-28a95404c0f7-kube-api-access-5zvdk\") pod \"neutron-operator-controller-manager-694c5bfc85-rjfbj\" (UID: \"9954b0be-71f8-430b-a61f-28a95404c0f7\") " pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.155804 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.161717 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.177407 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2qkmb" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.221060 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kng9p\" (UniqueName: \"kubernetes.io/projected/9277e421-df3a-49a2-81cc-86d0f7c65809-kube-api-access-kng9p\") pod \"mariadb-operator-controller-manager-67bf948998-p92zm\" (UID: \"9277e421-df3a-49a2-81cc-86d0f7c65809\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.228870 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnlpq\" (UniqueName: \"kubernetes.io/projected/0a83428f-312c-4590-beb3-8da4994c8951-kube-api-access-dnlpq\") pod \"manila-operator-controller-manager-765668569f-7kctj\" (UID: \"0a83428f-312c-4590-beb3-8da4994c8951\") " pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.261093 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfrvt\" (UniqueName: \"kubernetes.io/projected/f37006c8-da19-4d17-a6d5-f4b075f2220f-kube-api-access-wfrvt\") pod \"nova-operator-controller-manager-ddcbfd695-gqr2d\" (UID: \"f37006c8-da19-4d17-a6d5-f4b075f2220f\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.261253 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zvdk\" (UniqueName: \"kubernetes.io/projected/9954b0be-71f8-430b-a61f-28a95404c0f7-kube-api-access-5zvdk\") pod \"neutron-operator-controller-manager-694c5bfc85-rjfbj\" (UID: \"9954b0be-71f8-430b-a61f-28a95404c0f7\") " pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.261905 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.295635 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.321835 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.321886 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.322341 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.332216 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.342138 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zvdk\" (UniqueName: \"kubernetes.io/projected/9954b0be-71f8-430b-a61f-28a95404c0f7-kube-api-access-5zvdk\") pod \"neutron-operator-controller-manager-694c5bfc85-rjfbj\" (UID: \"9954b0be-71f8-430b-a61f-28a95404c0f7\") " pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.344791 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-7r675" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.345139 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-ts2hx" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.356306 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.366063 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfrvt\" (UniqueName: \"kubernetes.io/projected/f37006c8-da19-4d17-a6d5-f4b075f2220f-kube-api-access-wfrvt\") pod \"nova-operator-controller-manager-ddcbfd695-gqr2d\" (UID: \"f37006c8-da19-4d17-a6d5-f4b075f2220f\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.390527 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.422520 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.423453 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfrvt\" (UniqueName: \"kubernetes.io/projected/f37006c8-da19-4d17-a6d5-f4b075f2220f-kube-api-access-wfrvt\") pod \"nova-operator-controller-manager-ddcbfd695-gqr2d\" (UID: \"f37006c8-da19-4d17-a6d5-f4b075f2220f\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.429082 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.430585 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.457845 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.458904 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.459553 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.459917 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wvlbk" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.467862 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcqzj\" (UniqueName: \"kubernetes.io/projected/50db0152-72c0-4fc3-9cd5-6b2c01127341-kube-api-access-hcqzj\") pod \"octavia-operator-controller-manager-5c765b4558-wjspj\" (UID: \"50db0152-72c0-4fc3-9cd5-6b2c01127341\") " pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.467972 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndtpt\" (UniqueName: \"kubernetes.io/projected/132d53b6-84ec-44d6-8f8f-762e9595919e-kube-api-access-ndtpt\") pod \"ovn-operator-controller-manager-788c46999f-rmvr2\" (UID: \"132d53b6-84ec-44d6-8f8f-762e9595919e\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.468006 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:51 crc kubenswrapper[4656]: E0128 15:34:51.468149 4656 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:51 crc kubenswrapper[4656]: E0128 15:34:51.468225 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert podName:7341d49c-e9a9-4108-8a2c-bf808ccb49cf nodeName:}" failed. No retries permitted until 2026-01-28 15:34:52.468206241 +0000 UTC m=+982.976377045 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert") pod "infra-operator-controller-manager-79955696d6-bfl2p" (UID: "7341d49c-e9a9-4108-8a2c-bf808ccb49cf") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.474225 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-r428b" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.479352 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.495742 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.499213 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-bxscm" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.506977 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.507823 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.524424 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.526545 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4bg4d" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.526874 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.536869 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.558760 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.568829 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcqzj\" (UniqueName: \"kubernetes.io/projected/50db0152-72c0-4fc3-9cd5-6b2c01127341-kube-api-access-hcqzj\") pod \"octavia-operator-controller-manager-5c765b4558-wjspj\" (UID: \"50db0152-72c0-4fc3-9cd5-6b2c01127341\") " pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.568920 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6tv9\" (UniqueName: \"kubernetes.io/projected/92d1569e-5733-4779-b9fb-7feae2ea9317-kube-api-access-m6tv9\") pod \"placement-operator-controller-manager-5b964cf4cd-brxps\" (UID: \"92d1569e-5733-4779-b9fb-7feae2ea9317\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.568957 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmpqc\" (UniqueName: \"kubernetes.io/projected/5888a906-8758-4179-a30f-c2244ec46072-kube-api-access-qmpqc\") pod \"telemetry-operator-controller-manager-6d69b9c5db-nmjz8\" (UID: \"5888a906-8758-4179-a30f-c2244ec46072\") " pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.568990 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rclnj\" (UniqueName: \"kubernetes.io/projected/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-kube-api-access-rclnj\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: 
\"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.569030 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.569062 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndtpt\" (UniqueName: \"kubernetes.io/projected/132d53b6-84ec-44d6-8f8f-762e9595919e-kube-api-access-ndtpt\") pod \"ovn-operator-controller-manager-788c46999f-rmvr2\" (UID: \"132d53b6-84ec-44d6-8f8f-762e9595919e\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.569095 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5mm9\" (UniqueName: \"kubernetes.io/projected/e97e04fa-1b66-4373-b31f-12089f1f5b2b-kube-api-access-p5mm9\") pod \"swift-operator-controller-manager-68fc8c869-9q9vg\" (UID: \"e97e04fa-1b66-4373-b31f-12089f1f5b2b\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.643858 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcqzj\" (UniqueName: \"kubernetes.io/projected/50db0152-72c0-4fc3-9cd5-6b2c01127341-kube-api-access-hcqzj\") pod \"octavia-operator-controller-manager-5c765b4558-wjspj\" (UID: \"50db0152-72c0-4fc3-9cd5-6b2c01127341\") " pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.647654 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndtpt\" (UniqueName: \"kubernetes.io/projected/132d53b6-84ec-44d6-8f8f-762e9595919e-kube-api-access-ndtpt\") pod \"ovn-operator-controller-manager-788c46999f-rmvr2\" (UID: \"132d53b6-84ec-44d6-8f8f-762e9595919e\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.651191 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.655971 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.663155 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.669996 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6tv9\" (UniqueName: \"kubernetes.io/projected/92d1569e-5733-4779-b9fb-7feae2ea9317-kube-api-access-m6tv9\") pod \"placement-operator-controller-manager-5b964cf4cd-brxps\" (UID: \"92d1569e-5733-4779-b9fb-7feae2ea9317\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.670057 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmpqc\" (UniqueName: \"kubernetes.io/projected/5888a906-8758-4179-a30f-c2244ec46072-kube-api-access-qmpqc\") pod \"telemetry-operator-controller-manager-6d69b9c5db-nmjz8\" (UID: \"5888a906-8758-4179-a30f-c2244ec46072\") " pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.670097 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rclnj\" (UniqueName: \"kubernetes.io/projected/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-kube-api-access-rclnj\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.670157 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.670246 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mm9\" (UniqueName: \"kubernetes.io/projected/e97e04fa-1b66-4373-b31f-12089f1f5b2b-kube-api-access-p5mm9\") pod \"swift-operator-controller-manager-68fc8c869-9q9vg\" (UID: \"e97e04fa-1b66-4373-b31f-12089f1f5b2b\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" Jan 28 15:34:51 crc kubenswrapper[4656]: E0128 15:34:51.671200 4656 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:51 crc kubenswrapper[4656]: E0128 15:34:51.671261 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert podName:3dcf45d4-628c-4071-b732-8ade2d3c4b4e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:52.171243843 +0000 UTC m=+982.679414647 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" (UID: "3dcf45d4-628c-4071-b732-8ade2d3c4b4e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.671430 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.672447 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.682402 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fzdw5" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.690244 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.694535 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.703332 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.704825 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.710855 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-vvjhc" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.712333 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5mm9\" (UniqueName: \"kubernetes.io/projected/e97e04fa-1b66-4373-b31f-12089f1f5b2b-kube-api-access-p5mm9\") pod \"swift-operator-controller-manager-68fc8c869-9q9vg\" (UID: \"e97e04fa-1b66-4373-b31f-12089f1f5b2b\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.716285 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.724024 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmpqc\" (UniqueName: \"kubernetes.io/projected/5888a906-8758-4179-a30f-c2244ec46072-kube-api-access-qmpqc\") pod \"telemetry-operator-controller-manager-6d69b9c5db-nmjz8\" (UID: \"5888a906-8758-4179-a30f-c2244ec46072\") " pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.731819 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rclnj\" (UniqueName: \"kubernetes.io/projected/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-kube-api-access-rclnj\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:51 crc 
kubenswrapper[4656]: I0128 15:34:51.735356 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6tv9\" (UniqueName: \"kubernetes.io/projected/92d1569e-5733-4779-b9fb-7feae2ea9317-kube-api-access-m6tv9\") pod \"placement-operator-controller-manager-5b964cf4cd-brxps\" (UID: \"92d1569e-5733-4779-b9fb-7feae2ea9317\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.771873 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgcdr\" (UniqueName: \"kubernetes.io/projected/36524b9c-daa2-46d2-a732-b0964bb08873-kube-api-access-rgcdr\") pod \"watcher-operator-controller-manager-767b8bc766-xlrqs\" (UID: \"36524b9c-daa2-46d2-a732-b0964bb08873\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.771928 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58j57\" (UniqueName: \"kubernetes.io/projected/d903ea5b-f13e-43d5-b65b-44093c70ddee-kube-api-access-58j57\") pod \"test-operator-controller-manager-56f8bfcd9f-bxkwv\" (UID: \"d903ea5b-f13e-43d5-b65b-44093c70ddee\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.781341 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.782299 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.794230 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.794443 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2qnf5" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.795068 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.809101 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.837294 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.966246 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.966382 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgcdr\" (UniqueName: \"kubernetes.io/projected/36524b9c-daa2-46d2-a732-b0964bb08873-kube-api-access-rgcdr\") pod \"watcher-operator-controller-manager-767b8bc766-xlrqs\" (UID: \"36524b9c-daa2-46d2-a732-b0964bb08873\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.966485 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58j57\" (UniqueName: \"kubernetes.io/projected/d903ea5b-f13e-43d5-b65b-44093c70ddee-kube-api-access-58j57\") pod \"test-operator-controller-manager-56f8bfcd9f-bxkwv\" (UID: \"d903ea5b-f13e-43d5-b65b-44093c70ddee\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.966570 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.966823 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj2qx\" (UniqueName: \"kubernetes.io/projected/010cc4f5-4ac8-46e0-be08-80218981003e-kube-api-access-nj2qx\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.985987 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.987813 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f"] Jan 28 15:34:51 crc kubenswrapper[4656]: I0128 15:34:51.998446 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" event={"ID":"0cf0a4ad-85dd-47df-9307-e469f075a098","Type":"ContainerStarted","Data":"f2f1e0bb3ca27f3ebf55820f7580846953fbac34e758150c04de281d1465ff73"} Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.030842 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8"] Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.032114 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.032117 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58j57\" (UniqueName: \"kubernetes.io/projected/d903ea5b-f13e-43d5-b65b-44093c70ddee-kube-api-access-58j57\") pod \"test-operator-controller-manager-56f8bfcd9f-bxkwv\" (UID: \"d903ea5b-f13e-43d5-b65b-44093c70ddee\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.040331 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8"] Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.042442 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-l2w7h" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.042446 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgcdr\" (UniqueName: \"kubernetes.io/projected/36524b9c-daa2-46d2-a732-b0964bb08873-kube-api-access-rgcdr\") pod \"watcher-operator-controller-manager-767b8bc766-xlrqs\" (UID: \"36524b9c-daa2-46d2-a732-b0964bb08873\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.053342 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.069474 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.069556 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj2qx\" (UniqueName: \"kubernetes.io/projected/010cc4f5-4ac8-46e0-be08-80218981003e-kube-api-access-nj2qx\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.069637 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88ck8\" (UniqueName: \"kubernetes.io/projected/0bb42d6d-259a-4532-b3e2-732c0f271d9a-kube-api-access-88ck8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sqgs8\" (UID: \"0bb42d6d-259a-4532-b3e2-732c0f271d9a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.069666 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.069664 4656 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.070208 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:52.570187942 +0000 UTC m=+983.078358746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "metrics-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.070133 4656 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.070852 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:52.57083686 +0000 UTC m=+983.079007664 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "webhook-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.083521 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.106569 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj2qx\" (UniqueName: \"kubernetes.io/projected/010cc4f5-4ac8-46e0-be08-80218981003e-kube-api-access-nj2qx\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.124171 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.170697 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88ck8\" (UniqueName: \"kubernetes.io/projected/0bb42d6d-259a-4532-b3e2-732c0f271d9a-kube-api-access-88ck8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sqgs8\" (UID: \"0bb42d6d-259a-4532-b3e2-732c0f271d9a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.192476 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88ck8\" (UniqueName: \"kubernetes.io/projected/0bb42d6d-259a-4532-b3e2-732c0f271d9a-kube-api-access-88ck8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sqgs8\" (UID: \"0bb42d6d-259a-4532-b3e2-732c0f271d9a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.271708 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.271924 4656 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.271976 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert podName:3dcf45d4-628c-4071-b732-8ade2d3c4b4e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:53.271961997 +0000 UTC m=+983.780132801 (durationBeforeRetry 1s). 
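The repeated MountVolume.SetUp failures above share one root cause: the pods reference Secret-backed volumes (webhook-server-cert, metrics-server-cert, openstack-baremetal-operator-webhook-server-cert) that do not yet exist in the openstack-operators namespace, so the kubelet cannot materialize the volume and the pod cannot start. Below is a minimal client-go sketch of the same lookup the kubelet's secret volume plugin performs; the use of client-go and in-cluster credentials are assumptions for illustration, while the namespace and secret name are taken from the log.

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumption: runs inside the cluster
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same lookup the kubelet must complete before it can mount the volume.
	_, err = cs.CoreV1().Secrets("openstack-operators").
		Get(context.TODO(), "webhook-server-cert", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// Mirrors the kubelet error above: secret "webhook-server-cert" not found.
		fmt.Println(`secret "webhook-server-cert" not found; mount will be retried`)
	}
}
```

Once the operator installation creates these Secrets, the same Get succeeds and the kubelet's next retry mounts the volume, which is what eventually happens at 15:35:06 below.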
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" (UID: "3dcf45d4-628c-4071-b732-8ade2d3c4b4e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.473083 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq"] Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.479672 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.483913 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.484137 4656 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.484213 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert podName:7341d49c-e9a9-4108-8a2c-bf808ccb49cf nodeName:}" failed. No retries permitted until 2026-01-28 15:34:54.484198484 +0000 UTC m=+984.992369288 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert") pod "infra-operator-controller-manager-79955696d6-bfl2p" (UID: "7341d49c-e9a9-4108-8a2c-bf808ccb49cf") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: W0128 15:34:52.487885 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod113ba11f_aeba_4710_b5f6_0991e9766d45.slice/crio-524e363e6badba1c95dda78bfe6f3f3a4e21928dc90ae7745399a092bbc4a39c WatchSource:0}: Error finding container 524e363e6badba1c95dda78bfe6f3f3a4e21928dc90ae7745399a092bbc4a39c: Status 404 returned error can't find the container with id 524e363e6badba1c95dda78bfe6f3f3a4e21928dc90ae7745399a092bbc4a39c Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.522432 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q"] Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.542134 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw"] Jan 28 15:34:52 crc kubenswrapper[4656]: W0128 15:34:52.568491 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfeab083_1268_47aa_938e_bd91036755de.slice/crio-8f7ed47cd17305ebe1bca1e7dc31291922d5c6a643c8b01969e7c2207c353147 WatchSource:0}: Error finding container 8f7ed47cd17305ebe1bca1e7dc31291922d5c6a643c8b01969e7c2207c353147: Status 404 returned error can't find the container with id 8f7ed47cd17305ebe1bca1e7dc31291922d5c6a643c8b01969e7c2207c353147 Jan 28 15:34:52 crc kubenswrapper[4656]: W0128 15:34:52.576031 
4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ce4cdbc_3227_4679_8da9_9fd537996bd7.slice/crio-42a1354663bf4071a1659ff2622f41942c0de3af3d77c2049efd0c4c138a16aa WatchSource:0}: Error finding container 42a1354663bf4071a1659ff2622f41942c0de3af3d77c2049efd0c4c138a16aa: Status 404 returned error can't find the container with id 42a1354663bf4071a1659ff2622f41942c0de3af3d77c2049efd0c4c138a16aa Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.586377 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.586476 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.586718 4656 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.586794 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:53.586773861 +0000 UTC m=+984.094944665 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "metrics-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.587324 4656 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: E0128 15:34:52.587379 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:53.587366188 +0000 UTC m=+984.095536992 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "webhook-server-cert" not found Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.727706 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt"] Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.781779 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw"] Jan 28 15:34:52 crc kubenswrapper[4656]: W0128 15:34:52.797614 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bfa2d1e_9ab0_478a_a19d_d031a1a8a312.slice/crio-4aa89c6005a02a290de18324861d96b92a7ce9f0c01261703d27fcb4e69057aa WatchSource:0}: Error finding container 4aa89c6005a02a290de18324861d96b92a7ce9f0c01261703d27fcb4e69057aa: Status 404 returned error can't find the container with id 4aa89c6005a02a290de18324861d96b92a7ce9f0c01261703d27fcb4e69057aa Jan 28 15:34:52 crc kubenswrapper[4656]: I0128 15:34:52.802320 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2"] Jan 28 15:34:52 crc kubenswrapper[4656]: W0128 15:34:52.818444 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5bdaf78_b590_429f_bc9b_46c67a369456.slice/crio-cefcc29db34c4ed3a690281892bd148cdbfd6f69de2de0b764dbf7090127c5ba WatchSource:0}: Error finding container cefcc29db34c4ed3a690281892bd148cdbfd6f69de2de0b764dbf7090127c5ba: Status 404 returned error can't find the container with id cefcc29db34c4ed3a690281892bd148cdbfd6f69de2de0b764dbf7090127c5ba Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.006946 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" event={"ID":"1bfa2d1e-9ab0-478a-a19d-d031a1a8a312","Type":"ContainerStarted","Data":"4aa89c6005a02a290de18324861d96b92a7ce9f0c01261703d27fcb4e69057aa"} Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.011917 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" event={"ID":"cfeab083-1268-47aa-938e-bd91036755de","Type":"ContainerStarted","Data":"8f7ed47cd17305ebe1bca1e7dc31291922d5c6a643c8b01969e7c2207c353147"} Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.020124 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" event={"ID":"113ba11f-aeba-4710-b5f6-0991e9766d45","Type":"ContainerStarted","Data":"524e363e6badba1c95dda78bfe6f3f3a4e21928dc90ae7745399a092bbc4a39c"} Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.022431 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q" event={"ID":"6ce4cdbc-3227-4679-8da9-9fd537996bd7","Type":"ContainerStarted","Data":"42a1354663bf4071a1659ff2622f41942c0de3af3d77c2049efd0c4c138a16aa"} Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.024368 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" event={"ID":"45be18b4-f249-4c09-8875-9959686d7f8f","Type":"ContainerStarted","Data":"84f597ea9c518a35c17623de73d211d5cc8d5c40af45b11cc6bbf1f274811720"} Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.026907 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" event={"ID":"a5bdaf78-b590-429f-bc9b-46c67a369456","Type":"ContainerStarted","Data":"cefcc29db34c4ed3a690281892bd148cdbfd6f69de2de0b764dbf7090127c5ba"} Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.196651 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.228589 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-765668569f-7kctj"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.241883 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.300422 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.303364 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.303616 4656 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.303731 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert podName:3dcf45d4-628c-4071-b732-8ade2d3c4b4e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:55.303702314 +0000 UTC m=+985.811873118 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" (UID: "3dcf45d4-628c-4071-b732-8ade2d3c4b4e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.332145 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.342282 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2"] Jan 28 15:34:53 crc kubenswrapper[4656]: W0128 15:34:53.346651 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf37006c8_da19_4d17_a6d5_f4b075f2220f.slice/crio-28586ddab44f4bda0bbcda1f1c0df8e9a79f1719c86e0713082fde0da7ec19ff WatchSource:0}: Error finding container 28586ddab44f4bda0bbcda1f1c0df8e9a79f1719c86e0713082fde0da7ec19ff: Status 404 returned error can't find the container with id 28586ddab44f4bda0bbcda1f1c0df8e9a79f1719c86e0713082fde0da7ec19ff Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.350455 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.355431 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg"] Jan 28 15:34:53 crc kubenswrapper[4656]: W0128 15:34:53.363021 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod132d53b6_84ec_44d6_8f8f_762e9595919e.slice/crio-d875be74bd4a0979b68acb9161d4c83fa04df1baaed9a4603b9ac7762ead48c0 WatchSource:0}: Error finding container d875be74bd4a0979b68acb9161d4c83fa04df1baaed9a4603b9ac7762ead48c0: Status 404 returned error can't find the container with id d875be74bd4a0979b68acb9161d4c83fa04df1baaed9a4603b9ac7762ead48c0 Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.365988 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.375497 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.378628 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.385232 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps"] Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.390659 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8"] Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.430895 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6tv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-brxps_openstack-operators(92d1569e-5733-4779-b9fb-7feae2ea9317): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.432707 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" podUID="92d1569e-5733-4779-b9fb-7feae2ea9317" Jan 28 15:34:53 crc kubenswrapper[4656]: W0128 15:34:53.434359 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode97e04fa_1b66_4373_b31f_12089f1f5b2b.slice/crio-97b340278e2bb40f0d7d83fefb29266f7ab14c4a223e3d1e4d99bfd17a9f6cf0 WatchSource:0}: Error finding container 97b340278e2bb40f0d7d83fefb29266f7ab14c4a223e3d1e4d99bfd17a9f6cf0: Status 404 returned error can't find the container with id 97b340278e2bb40f0d7d83fefb29266f7ab14c4a223e3d1e4d99bfd17a9f6cf0 Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.435118 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58j57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-bxkwv_openstack-operators(d903ea5b-f13e-43d5-b65b-44093c70ddee): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.442770 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" podUID="d903ea5b-f13e-43d5-b65b-44093c70ddee" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.508759 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-88ck8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-sqgs8_openstack-operators(0bb42d6d-259a-4532-b3e2-732c0f271d9a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.519189 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p5mm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
swift-operator-controller-manager-68fc8c869-9q9vg_openstack-operators(e97e04fa-1b66-4373-b31f-12089f1f5b2b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.519833 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:c9d639f3d01f7a4f139a8b7fb751ca880893f7b9a4e596d6a5304534e46392ba,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qmpqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6d69b9c5db-nmjz8_openstack-operators(5888a906-8758-4179-a30f-c2244ec46072): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.520351 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" podUID="0bb42d6d-259a-4532-b3e2-732c0f271d9a" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.528855 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" podUID="5888a906-8758-4179-a30f-c2244ec46072" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.529007 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" podUID="e97e04fa-1b66-4373-b31f-12089f1f5b2b" Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.626043 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:53 crc kubenswrapper[4656]: I0128 15:34:53.626209 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.626396 4656 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.626466 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:55.626445397 +0000 UTC m=+986.134616201 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "webhook-server-cert" not found Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.626865 4656 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:34:53 crc kubenswrapper[4656]: E0128 15:34:53.626898 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:55.626890539 +0000 UTC m=+986.135061343 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "metrics-server-cert" not found Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.037740 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" event={"ID":"0bb42d6d-259a-4532-b3e2-732c0f271d9a","Type":"ContainerStarted","Data":"fcdc5e1eb16256a60f9e9a1658d315a52e64c752300d9d4dfd3b913ced29cb48"} Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.039274 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" event={"ID":"36524b9c-daa2-46d2-a732-b0964bb08873","Type":"ContainerStarted","Data":"734e772f82404bdf09aa164624787ad8615ea04c812298bed5986036b0dff158"} Jan 28 15:34:54 crc kubenswrapper[4656]: E0128 15:34:54.043523 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" podUID="0bb42d6d-259a-4532-b3e2-732c0f271d9a" Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.048661 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" event={"ID":"f37006c8-da19-4d17-a6d5-f4b075f2220f","Type":"ContainerStarted","Data":"28586ddab44f4bda0bbcda1f1c0df8e9a79f1719c86e0713082fde0da7ec19ff"} Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.053818 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" event={"ID":"9954b0be-71f8-430b-a61f-28a95404c0f7","Type":"ContainerStarted","Data":"b523f4627da4d961e55e0ca354187359238aec0f72b438929321c01470630a54"} Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.055704 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" event={"ID":"d903ea5b-f13e-43d5-b65b-44093c70ddee","Type":"ContainerStarted","Data":"fb83bd714aaf5342241f75311dd5ed2767d4d421fe0639a9c3c1f63b595dea7b"} Jan 28 15:34:54 crc kubenswrapper[4656]: E0128 15:34:54.057889 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" podUID="d903ea5b-f13e-43d5-b65b-44093c70ddee" Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.058973 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" event={"ID":"92d1569e-5733-4779-b9fb-7feae2ea9317","Type":"ContainerStarted","Data":"5b692085648e1fd94d9ffc63fc4391e244a5562d6eafb7dd6fb2fccf54f5e206"} Jan 28 15:34:54 crc kubenswrapper[4656]: E0128 15:34:54.062479 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" podUID="92d1569e-5733-4779-b9fb-7feae2ea9317" Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.063644 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" event={"ID":"ae47e69a-49f4-4b1a-8d68-068b5e99f22a","Type":"ContainerStarted","Data":"c3260c66ea07ec03ee65adbe4433875a08cad4e356327f3835038f8dbf7bf90c"} Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.065071 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" event={"ID":"132d53b6-84ec-44d6-8f8f-762e9595919e","Type":"ContainerStarted","Data":"d875be74bd4a0979b68acb9161d4c83fa04df1baaed9a4603b9ac7762ead48c0"} Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.068945 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" event={"ID":"0a83428f-312c-4590-beb3-8da4994c8951","Type":"ContainerStarted","Data":"1cc886493ea1a5aed490f85e19648c22a5327acc64ec656181951a16c9b99519"} Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.070728 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" event={"ID":"9277e421-df3a-49a2-81cc-86d0f7c65809","Type":"ContainerStarted","Data":"1f0d7b09e6fa64c21c251d4962b51ffbe4cd6c93feb51d936871b12b81ae87c4"} Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.072005 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" event={"ID":"5888a906-8758-4179-a30f-c2244ec46072","Type":"ContainerStarted","Data":"a3dcfab8c0cf196ae0aba68ef6d314f42a0da045728fe19a25d079389b063406"} Jan 28 15:34:54 crc kubenswrapper[4656]: E0128 15:34:54.074739 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:c9d639f3d01f7a4f139a8b7fb751ca880893f7b9a4e596d6a5304534e46392ba\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" podUID="5888a906-8758-4179-a30f-c2244ec46072" Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.077974 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" event={"ID":"50db0152-72c0-4fc3-9cd5-6b2c01127341","Type":"ContainerStarted","Data":"baceeba5404b339d1d658ace96297a872c3635685cca84d1659fded0b5def497"} Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.079498 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" event={"ID":"e97e04fa-1b66-4373-b31f-12089f1f5b2b","Type":"ContainerStarted","Data":"97b340278e2bb40f0d7d83fefb29266f7ab14c4a223e3d1e4d99bfd17a9f6cf0"} Jan 28 15:34:54 crc kubenswrapper[4656]: E0128 15:34:54.081485 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" 
pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" podUID="e97e04fa-1b66-4373-b31f-12089f1f5b2b" Jan 28 15:34:54 crc kubenswrapper[4656]: I0128 15:34:54.545291 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:54 crc kubenswrapper[4656]: E0128 15:34:54.545499 4656 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:54 crc kubenswrapper[4656]: E0128 15:34:54.545597 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert podName:7341d49c-e9a9-4108-8a2c-bf808ccb49cf nodeName:}" failed. No retries permitted until 2026-01-28 15:34:58.545577017 +0000 UTC m=+989.053747821 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert") pod "infra-operator-controller-manager-79955696d6-bfl2p" (UID: "7341d49c-e9a9-4108-8a2c-bf808ccb49cf") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.094115 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" podUID="0bb42d6d-259a-4532-b3e2-732c0f271d9a" Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.094541 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" podUID="e97e04fa-1b66-4373-b31f-12089f1f5b2b" Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.094605 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" podUID="d903ea5b-f13e-43d5-b65b-44093c70ddee" Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.094715 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" podUID="92d1569e-5733-4779-b9fb-7feae2ea9317" Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.094737 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/telemetry-operator@sha256:c9d639f3d01f7a4f139a8b7fb751ca880893f7b9a4e596d6a5304534e46392ba\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" podUID="5888a906-8758-4179-a30f-c2244ec46072" Jan 28 15:34:55 crc kubenswrapper[4656]: I0128 15:34:55.358456 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.358735 4656 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.358803 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert podName:3dcf45d4-628c-4071-b732-8ade2d3c4b4e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:59.358783226 +0000 UTC m=+989.866954020 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" (UID: "3dcf45d4-628c-4071-b732-8ade2d3c4b4e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:55 crc kubenswrapper[4656]: I0128 15:34:55.662597 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.662812 4656 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.663410 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:59.663383315 +0000 UTC m=+990.171554119 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "webhook-server-cert" not found Jan 28 15:34:55 crc kubenswrapper[4656]: I0128 15:34:55.663334 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.663679 4656 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:34:55 crc kubenswrapper[4656]: E0128 15:34:55.663828 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:34:59.663803147 +0000 UTC m=+990.171974031 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "metrics-server-cert" not found Jan 28 15:34:58 crc kubenswrapper[4656]: I0128 15:34:58.621331 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:34:58 crc kubenswrapper[4656]: E0128 15:34:58.621542 4656 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:58 crc kubenswrapper[4656]: E0128 15:34:58.621910 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert podName:7341d49c-e9a9-4108-8a2c-bf808ccb49cf nodeName:}" failed. No retries permitted until 2026-01-28 15:35:06.621882776 +0000 UTC m=+997.130053580 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert") pod "infra-operator-controller-manager-79955696d6-bfl2p" (UID: "7341d49c-e9a9-4108-8a2c-bf808ccb49cf") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:34:59 crc kubenswrapper[4656]: I0128 15:34:59.364788 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:34:59 crc kubenswrapper[4656]: E0128 15:34:59.365124 4656 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:59 crc kubenswrapper[4656]: E0128 15:34:59.365211 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert podName:3dcf45d4-628c-4071-b732-8ade2d3c4b4e nodeName:}" failed. No retries permitted until 2026-01-28 15:35:07.36519024 +0000 UTC m=+997.873361044 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" (UID: "3dcf45d4-628c-4071-b732-8ade2d3c4b4e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:34:59 crc kubenswrapper[4656]: I0128 15:34:59.685707 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:59 crc kubenswrapper[4656]: I0128 15:34:59.685859 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:34:59 crc kubenswrapper[4656]: E0128 15:34:59.686252 4656 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:34:59 crc kubenswrapper[4656]: E0128 15:34:59.686331 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:35:07.686312285 +0000 UTC m=+998.194483079 (durationBeforeRetry 8s). 
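The durationBeforeRetry values also grow between attempts: the failures logged at 15:34:55 schedule a 4s retry, and the retries at 15:34:58-15:34:59 schedule an 8s one. That progression is consistent with an exponential backoff that doubles per failure from a sub-second starting point (0.5s, 1s, 2s, 4s, 8s, ...); the initial delay and cap below are assumptions for the sketch, not values read out of kubelet:

    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialBackoff = 500 * time.Millisecond // assumed starting delay
        maxBackoff     = 2 * time.Minute        // assumed cap
    )

    // nextBackoff doubles the previous delay, so a volume that keeps
    // failing to mount is retried at 0.5s, 1s, 2s, 4s, 8s, ... intervals.
    func nextBackoff(current time.Duration) time.Duration {
        if current == 0 {
            return initialBackoff
        }
        if next := current * 2; next < maxBackoff {
            return next
        }
        return maxBackoff
    }

    func main() {
        var d time.Duration
        for attempt := 1; attempt <= 6; attempt++ {
            d = nextBackoff(d)
            fmt.Printf("attempt %d: durationBeforeRetry %s\n", attempt, d)
        }
    }

Under these assumptions the 4s and 8s deadlines would correspond to the fourth and fifth consecutive failures of each mount; the retries stop entirely at 15:35:06-15:35:07, when the secrets finally resolve and MountVolume.SetUp succeeds.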
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "webhook-server-cert" not found Jan 28 15:34:59 crc kubenswrapper[4656]: E0128 15:34:59.686856 4656 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:34:59 crc kubenswrapper[4656]: E0128 15:34:59.686900 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs podName:010cc4f5-4ac8-46e0-be08-80218981003e nodeName:}" failed. No retries permitted until 2026-01-28 15:35:07.686891722 +0000 UTC m=+998.195062526 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs") pod "openstack-operator-controller-manager-57d89bf95c-gltwn" (UID: "010cc4f5-4ac8-46e0-be08-80218981003e") : secret "metrics-server-cert" not found Jan 28 15:35:06 crc kubenswrapper[4656]: I0128 15:35:06.689851 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:35:06 crc kubenswrapper[4656]: I0128 15:35:06.695707 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7341d49c-e9a9-4108-8a2c-bf808ccb49cf-cert\") pod \"infra-operator-controller-manager-79955696d6-bfl2p\" (UID: \"7341d49c-e9a9-4108-8a2c-bf808ccb49cf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:35:06 crc kubenswrapper[4656]: I0128 15:35:06.810756 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" Jan 28 15:35:07 crc kubenswrapper[4656]: I0128 15:35:07.399196 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:35:07 crc kubenswrapper[4656]: I0128 15:35:07.407735 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3dcf45d4-628c-4071-b732-8ade2d3c4b4e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df55nv\" (UID: \"3dcf45d4-628c-4071-b732-8ade2d3c4b4e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:35:07 crc kubenswrapper[4656]: I0128 15:35:07.695328 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" Jan 28 15:35:07 crc kubenswrapper[4656]: I0128 15:35:07.703331 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:35:07 crc kubenswrapper[4656]: I0128 15:35:07.703426 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:35:07 crc kubenswrapper[4656]: I0128 15:35:07.713895 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-webhook-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:35:07 crc kubenswrapper[4656]: I0128 15:35:07.716925 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/010cc4f5-4ac8-46e0-be08-80218981003e-metrics-certs\") pod \"openstack-operator-controller-manager-57d89bf95c-gltwn\" (UID: \"010cc4f5-4ac8-46e0-be08-80218981003e\") " pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:35:07 crc kubenswrapper[4656]: I0128 15:35:07.756480 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" Jan 28 15:35:10 crc kubenswrapper[4656]: E0128 15:35:10.696462 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:22665b40ffeef62d1a612c1f9f0fa8e97ff95085fad123895d786b770f421fc0" Jan 28 15:35:10 crc kubenswrapper[4656]: E0128 15:35:10.697066 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:22665b40ffeef62d1a612c1f9f0fa8e97ff95085fad123895d786b770f421fc0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5zvdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-694c5bfc85-rjfbj_openstack-operators(9954b0be-71f8-430b-a61f-28a95404c0f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:10 crc kubenswrapper[4656]: E0128 15:35:10.698283 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" podUID="9954b0be-71f8-430b-a61f-28a95404c0f7" Jan 28 15:35:11 crc kubenswrapper[4656]: I0128 15:35:11.269803 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:35:11 crc kubenswrapper[4656]: I0128 15:35:11.270220 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:35:11 crc kubenswrapper[4656]: E0128 15:35:11.419154 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:22665b40ffeef62d1a612c1f9f0fa8e97ff95085fad123895d786b770f421fc0\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" podUID="9954b0be-71f8-430b-a61f-28a95404c0f7" Jan 28 15:35:12 crc kubenswrapper[4656]: E0128 15:35:12.089599 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/manila-operator@sha256:2e1a77365c3b08ff39892565abfc72b72e969f623e58a2663fb93890371fc9da" Jan 28 15:35:12 crc kubenswrapper[4656]: E0128 15:35:12.089869 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:2e1a77365c3b08ff39892565abfc72b72e969f623e58a2663fb93890371fc9da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dnlpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-765668569f-7kctj_openstack-operators(0a83428f-312c-4590-beb3-8da4994c8951): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:12 crc kubenswrapper[4656]: E0128 15:35:12.091045 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" podUID="0a83428f-312c-4590-beb3-8da4994c8951" Jan 28 15:35:12 crc kubenswrapper[4656]: E0128 15:35:12.405028 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:2e1a77365c3b08ff39892565abfc72b72e969f623e58a2663fb93890371fc9da\\\"\"" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" podUID="0a83428f-312c-4590-beb3-8da4994c8951" Jan 28 15:35:13 crc kubenswrapper[4656]: E0128 15:35:13.347742 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 28 15:35:13 crc kubenswrapper[4656]: E0128 15:35:13.347973 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kng9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-p92zm_openstack-operators(9277e421-df3a-49a2-81cc-86d0f7c65809): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:13 crc kubenswrapper[4656]: E0128 15:35:13.349213 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" podUID="9277e421-df3a-49a2-81cc-86d0f7c65809" Jan 28 15:35:13 crc kubenswrapper[4656]: E0128 15:35:13.411849 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" podUID="9277e421-df3a-49a2-81cc-86d0f7c65809" Jan 28 15:35:14 crc kubenswrapper[4656]: E0128 15:35:14.860866 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/designate-operator@sha256:29a3092217e72f1ec8a163ed3d15a0a5ccc5b3117e64c72bf5e68597cc233b3d" Jan 28 15:35:14 crc kubenswrapper[4656]: E0128 15:35:14.861285 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/designate-operator@sha256:29a3092217e72f1ec8a163ed3d15a0a5ccc5b3117e64c72bf5e68597cc233b3d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rn4q4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-66dfbd6f5d-r8cjw_openstack-operators(cfeab083-1268-47aa-938e-bd91036755de): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:14 crc kubenswrapper[4656]: E0128 15:35:14.862830 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" podUID="cfeab083-1268-47aa-938e-bd91036755de" Jan 28 15:35:15 crc kubenswrapper[4656]: E0128 15:35:15.447951 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/designate-operator@sha256:29a3092217e72f1ec8a163ed3d15a0a5ccc5b3117e64c72bf5e68597cc233b3d\\\"\"" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" podUID="cfeab083-1268-47aa-938e-bd91036755de" Jan 28 15:35:15 crc kubenswrapper[4656]: E0128 15:35:15.503384 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4" Jan 28 15:35:15 crc kubenswrapper[4656]: E0128 15:35:15.503885 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rgcdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-767b8bc766-xlrqs_openstack-operators(36524b9c-daa2-46d2-a732-b0964bb08873): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:15 crc kubenswrapper[4656]: E0128 15:35:15.506344 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" podUID="36524b9c-daa2-46d2-a732-b0964bb08873" Jan 28 15:35:16 crc kubenswrapper[4656]: E0128 15:35:16.116714 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/ironic-operator@sha256:5f48b6af05a584d3da5c973f83195d999cc151aa0f187cabc8002cb46d60afe5" Jan 28 15:35:16 crc kubenswrapper[4656]: E0128 15:35:16.117403 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/ironic-operator@sha256:5f48b6af05a584d3da5c973f83195d999cc151aa0f187cabc8002cb46d60afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkhq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-958664b5-m9jtk_openstack-operators(ae47e69a-49f4-4b1a-8d68-068b5e99f22a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:16 crc kubenswrapper[4656]: E0128 15:35:16.118600 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" podUID="ae47e69a-49f4-4b1a-8d68-068b5e99f22a" Jan 28 15:35:16 crc kubenswrapper[4656]: E0128 15:35:16.447506 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" podUID="36524b9c-daa2-46d2-a732-b0964bb08873" Jan 28 15:35:16 crc kubenswrapper[4656]: E0128 15:35:16.447793 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:5f48b6af05a584d3da5c973f83195d999cc151aa0f187cabc8002cb46d60afe5\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" podUID="ae47e69a-49f4-4b1a-8d68-068b5e99f22a" Jan 28 15:35:16 crc 
kubenswrapper[4656]: E0128 15:35:16.836026 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 28 15:35:16 crc kubenswrapper[4656]: E0128 15:35:16.836311 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6l2vx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-9q5lw_openstack-operators(1bfa2d1e-9ab0-478a-a19d-d031a1a8a312): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:16 crc kubenswrapper[4656]: E0128 15:35:16.837763 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" podUID="1bfa2d1e-9ab0-478a-a19d-d031a1a8a312" Jan 28 15:35:17 crc kubenswrapper[4656]: E0128 15:35:17.455093 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" podUID="1bfa2d1e-9ab0-478a-a19d-d031a1a8a312" Jan 28 15:35:19 crc kubenswrapper[4656]: E0128 15:35:19.989041 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/cinder-operator@sha256:6da7ec7bf701fe1dd489852a16429f163a69073fae67b872dca4b080cc3514ad" Jan 28 15:35:19 crc kubenswrapper[4656]: E0128 15:35:19.989470 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/cinder-operator@sha256:6da7ec7bf701fe1dd489852a16429f163a69073fae67b872dca4b080cc3514ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c2txf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-f6487bd57-jwv7f_openstack-operators(0cf0a4ad-85dd-47df-9307-e469f075a098): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:19 crc kubenswrapper[4656]: E0128 15:35:19.991123 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" podUID="0cf0a4ad-85dd-47df-9307-e469f075a098" Jan 28 15:35:20 crc 
kubenswrapper[4656]: E0128 15:35:20.473948 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/cinder-operator@sha256:6da7ec7bf701fe1dd489852a16429f163a69073fae67b872dca4b080cc3514ad\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" podUID="0cf0a4ad-85dd-47df-9307-e469f075a098" Jan 28 15:35:20 crc kubenswrapper[4656]: E0128 15:35:20.502982 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/glance-operator@sha256:8a7e2637765333c555b0b932c2bfc789235aea2c7276961657a03ef1352a7264" Jan 28 15:35:20 crc kubenswrapper[4656]: E0128 15:35:20.503290 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/glance-operator@sha256:8a7e2637765333c555b0b932c2bfc789235aea2c7276961657a03ef1352a7264,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pck4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-6db5dbd896-cfpjq_openstack-operators(113ba11f-aeba-4710-b5f6-0991e9766d45): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:20 crc kubenswrapper[4656]: E0128 15:35:20.505170 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying 
config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" podUID="113ba11f-aeba-4710-b5f6-0991e9766d45" Jan 28 15:35:21 crc kubenswrapper[4656]: E0128 15:35:21.488086 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/glance-operator@sha256:8a7e2637765333c555b0b932c2bfc789235aea2c7276961657a03ef1352a7264\\\"\"" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" podUID="113ba11f-aeba-4710-b5f6-0991e9766d45" Jan 28 15:35:22 crc kubenswrapper[4656]: E0128 15:35:22.106639 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:c7804813a3bba8910a47a5f32bd528335e18397f93cf5f7e7181d3d2c209b59b" Jan 28 15:35:22 crc kubenswrapper[4656]: E0128 15:35:22.106938 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:c7804813a3bba8910a47a5f32bd528335e18397f93cf5f7e7181d3d2c209b59b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hcqzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5c765b4558-wjspj_openstack-operators(50db0152-72c0-4fc3-9cd5-6b2c01127341): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:22 crc kubenswrapper[4656]: E0128 
15:35:22.108155 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" podUID="50db0152-72c0-4fc3-9cd5-6b2c01127341" Jan 28 15:35:22 crc kubenswrapper[4656]: E0128 15:35:22.486745 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:c7804813a3bba8910a47a5f32bd528335e18397f93cf5f7e7181d3d2c209b59b\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" podUID="50db0152-72c0-4fc3-9cd5-6b2c01127341" Jan 28 15:35:22 crc kubenswrapper[4656]: E0128 15:35:22.766205 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:f832a7a2326f1b84e7963fdea324e2a5285d636b366f059465c98299ae2d2d63" Jan 28 15:35:22 crc kubenswrapper[4656]: E0128 15:35:22.766502 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:f832a7a2326f1b84e7963fdea324e2a5285d636b366f059465c98299ae2d2d63,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mbknf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
keystone-operator-controller-manager-7b84b46695-86ht2_openstack-operators(a5bdaf78-b590-429f-bc9b-46c67a369456): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:35:22 crc kubenswrapper[4656]: E0128 15:35:22.767739 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" podUID="a5bdaf78-b590-429f-bc9b-46c67a369456" Jan 28 15:35:23 crc kubenswrapper[4656]: E0128 15:35:23.493787 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:f832a7a2326f1b84e7963fdea324e2a5285d636b366f059465c98299ae2d2d63\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" podUID="a5bdaf78-b590-429f-bc9b-46c67a369456" Jan 28 15:35:26 crc kubenswrapper[4656]: E0128 15:35:26.071431 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 28 15:35:26 crc kubenswrapper[4656]: E0128 15:35:26.072145 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58j57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-bxkwv_openstack-operators(d903ea5b-f13e-43d5-b65b-44093c70ddee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 15:35:26 crc kubenswrapper[4656]: E0128 15:35:26.074072 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" podUID="d903ea5b-f13e-43d5-b65b-44093c70ddee"
Jan 28 15:35:26 crc kubenswrapper[4656]: E0128 15:35:26.743698 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382"
Jan 28 15:35:26 crc kubenswrapper[4656]: E0128 15:35:26.744113 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p5mm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-9q9vg_openstack-operators(e97e04fa-1b66-4373-b31f-12089f1f5b2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 15:35:26 crc kubenswrapper[4656]: E0128 15:35:26.745501 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" podUID="e97e04fa-1b66-4373-b31f-12089f1f5b2b"
Jan 28 15:35:27 crc kubenswrapper[4656]: I0128 15:35:27.177526 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 15:35:29 crc kubenswrapper[4656]: E0128 15:35:29.417715 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/telemetry-operator@sha256:c9d639f3d01f7a4f139a8b7fb751ca880893f7b9a4e596d6a5304534e46392ba"
Jan 28 15:35:29 crc kubenswrapper[4656]: E0128 15:35:29.417980 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:c9d639f3d01f7a4f139a8b7fb751ca880893f7b9a4e596d6a5304534e46392ba,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qmpqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6d69b9c5db-nmjz8_openstack-operators(5888a906-8758-4179-a30f-c2244ec46072): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 15:35:29 crc kubenswrapper[4656]: E0128 15:35:29.419256 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" podUID="5888a906-8758-4179-a30f-c2244ec46072"
Jan 28 15:35:30 crc kubenswrapper[4656]: E0128 15:35:30.113456 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488"
Jan 28 15:35:30 crc kubenswrapper[4656]: E0128 15:35:30.113755 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6tv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-brxps_openstack-operators(92d1569e-5733-4779-b9fb-7feae2ea9317): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 15:35:30 crc kubenswrapper[4656]: E0128 15:35:30.116127 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" podUID="92d1569e-5733-4779-b9fb-7feae2ea9317"
Jan 28 15:35:31 crc kubenswrapper[4656]: E0128 15:35:31.009783 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61"
Jan 28 15:35:31 crc kubenswrapper[4656]: E0128 15:35:31.010081 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wfrvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-ddcbfd695-gqr2d_openstack-operators(f37006c8-da19-4d17-a6d5-f4b075f2220f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 15:35:31 crc kubenswrapper[4656]: E0128 15:35:31.011993 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" podUID="f37006c8-da19-4d17-a6d5-f4b075f2220f"
Jan 28 15:35:31 crc kubenswrapper[4656]: E0128 15:35:31.667461 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
Jan 28 15:35:31 crc kubenswrapper[4656]: E0128 15:35:31.668298 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-88ck8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-sqgs8_openstack-operators(0bb42d6d-259a-4532-b3e2-732c0f271d9a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 15:35:31 crc kubenswrapper[4656]: E0128 15:35:31.669661 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" podUID="0bb42d6d-259a-4532-b3e2-732c0f271d9a"
Jan 28 15:35:32 crc kubenswrapper[4656]: E0128 15:35:32.165448 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61\\\"\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" podUID="f37006c8-da19-4d17-a6d5-f4b075f2220f"
Jan 28 15:35:32 crc kubenswrapper[4656]: I0128 15:35:32.863047 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn"]
Jan 28 15:35:32 crc kubenswrapper[4656]: I0128 15:35:32.922245 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p"]
Jan 28 15:35:33 crc kubenswrapper[4656]: I0128 15:35:33.047762 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv"]
Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.587206 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" event={"ID":"7341d49c-e9a9-4108-8a2c-bf808ccb49cf","Type":"ContainerStarted","Data":"8a24e9b05ad66110a394f610b8ea77f4909847e91224300529898871d9721271"}
Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.593452 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" event={"ID":"132d53b6-84ec-44d6-8f8f-762e9595919e","Type":"ContainerStarted","Data":"06bc9aef0fd98a9893b51d390852a16af89e5d3814041c2bfbae1fe1a03756ac"}
Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.594543 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2"
Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.600640 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q" event={"ID":"6ce4cdbc-3227-4679-8da9-9fd537996bd7","Type":"ContainerStarted","Data":"6de222a7b02d2f681bf300c9ff4ba45c9a6ce04f723e561461887b4652f25b25"}
Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.600894 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q"
Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.603528 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" event={"ID":"010cc4f5-4ac8-46e0-be08-80218981003e","Type":"ContainerStarted","Data":"d69367ab05162c41e71aa5477a4a928046a3d4abb6a9d3032ff4f1501dc11c5f"}
Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.613593 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" event={"ID":"3dcf45d4-628c-4071-b732-8ade2d3c4b4e","Type":"ContainerStarted","Data":"da1921127dbed16d5469f5358682b34de29126e475b84f45dc4de7de13b32d70"}
Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.660454 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q" podStartSLOduration=14.476789286 podStartE2EDuration="44.660424572s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:52.578029709 +0000 UTC m=+983.086200513" lastFinishedPulling="2026-01-28 15:35:22.761664995 +0000 UTC m=+1013.269835799" observedRunningTime="2026-01-28 15:35:34.656373845 +0000 UTC m=+1025.164544649" watchObservedRunningTime="2026-01-28 15:35:34.660424572 +0000 UTC m=+1025.168595376"
Jan 28 15:35:34 crc kubenswrapper[4656]: I0128 15:35:34.664078 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2" podStartSLOduration=14.267700502 podStartE2EDuration="43.664060457s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.365465745 +0000 UTC m=+983.873636549" lastFinishedPulling="2026-01-28 15:35:22.7618257 +0000 UTC m=+1013.269996504" observedRunningTime="2026-01-28 15:35:34.621711716 +0000 UTC m=+1025.129882520" watchObservedRunningTime="2026-01-28 15:35:34.664060457 +0000 UTC m=+1025.172231261"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.628516 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" event={"ID":"9954b0be-71f8-430b-a61f-28a95404c0f7","Type":"ContainerStarted","Data":"8f9368524f080563ff26e3aae2c72d048be1b02b0b1c8966c195f138c8517388"}
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.630202 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.636334 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" event={"ID":"0cf0a4ad-85dd-47df-9307-e469f075a098","Type":"ContainerStarted","Data":"aaf265a5d181e454037130389243c88ca45edeb1e3ebce32ee221f6918fb280f"}
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.636595 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.639333 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" event={"ID":"0a83428f-312c-4590-beb3-8da4994c8951","Type":"ContainerStarted","Data":"2fef2fe2438c62dfe85f51ac45a0fcf784ca834800a760f77dd8d118a210ac68"}
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.640078 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.641878 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" event={"ID":"a5bdaf78-b590-429f-bc9b-46c67a369456","Type":"ContainerStarted","Data":"3efb0e87471a3f77df1f1448c729d0de9b0fadbfd0eef8fb9d1282a564a62a3f"}
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.642091 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.655137 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" event={"ID":"ae47e69a-49f4-4b1a-8d68-068b5e99f22a","Type":"ContainerStarted","Data":"7d846cf0ddef2965d73cfcf62a660b04dfb82183415597e509fb6c8210cf1355"}
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.655425 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.675752 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" event={"ID":"1bfa2d1e-9ab0-478a-a19d-d031a1a8a312","Type":"ContainerStarted","Data":"947cad0aab5477fd30979ad90cc08830578b2eff12acc1814e39ea1a3a77fac4"}
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.676521 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.696077 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj" podStartSLOduration=4.822803829 podStartE2EDuration="45.696036571s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.266384429 +0000 UTC m=+983.774555233" lastFinishedPulling="2026-01-28 15:35:34.139617171 +0000 UTC m=+1024.647787975" observedRunningTime="2026-01-28 15:35:35.680292677 +0000 UTC m=+1026.188463491" watchObservedRunningTime="2026-01-28 15:35:35.696036571 +0000 UTC m=+1026.204207375"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.705510 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" event={"ID":"45be18b4-f249-4c09-8875-9959686d7f8f","Type":"ContainerStarted","Data":"ce8da8f6bc7bc99ea80525ab5ecd55ccd2672da02192e3c07f559ec13b4390de"}
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.705798 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.708935 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" event={"ID":"36524b9c-daa2-46d2-a732-b0964bb08873","Type":"ContainerStarted","Data":"26972cd96158dc81f8523c3f1d1d8c68a072edba0524da7cbe6b604eccc61174"}
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.709761 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.710814 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" event={"ID":"9277e421-df3a-49a2-81cc-86d0f7c65809","Type":"ContainerStarted","Data":"6919851838ff21eaaa344e9ba0a4e2476e0a23a9f277058bfbf2eff559b11944"}
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.711391 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.712412 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" event={"ID":"cfeab083-1268-47aa-938e-bd91036755de","Type":"ContainerStarted","Data":"30f239c2e7c47eacbbfd6d4271678d731a84ce3799905d86890140eda7ffbc51"}
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.712746 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.714937 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" event={"ID":"113ba11f-aeba-4710-b5f6-0991e9766d45","Type":"ContainerStarted","Data":"2a341b3a274d22a62e3733c2fe078d1994c17cef4e4d8446367851ebcbfb8447"}
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.715305 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.716923 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" event={"ID":"010cc4f5-4ac8-46e0-be08-80218981003e","Type":"ContainerStarted","Data":"3af250c74fb21a73f8d5a5cabf839d42eb8fb09350569a0d13d833257a899042"}
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.716946 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.910497 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f" podStartSLOduration=2.941901776 podStartE2EDuration="45.910471991s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:51.998619839 +0000 UTC m=+982.506790643" lastFinishedPulling="2026-01-28 15:35:34.967190044 +0000 UTC m=+1025.475360858" observedRunningTime="2026-01-28 15:35:35.885601815 +0000 UTC m=+1026.393772609" watchObservedRunningTime="2026-01-28 15:35:35.910471991 +0000 UTC m=+1026.418642795"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.914869 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk" podStartSLOduration=4.968962171 podStartE2EDuration="45.914843787s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.321125867 +0000 UTC m=+983.829296671" lastFinishedPulling="2026-01-28 15:35:34.267007483 +0000 UTC m=+1024.775178287" observedRunningTime="2026-01-28 15:35:35.910801571 +0000 UTC m=+1026.418972375" watchObservedRunningTime="2026-01-28 15:35:35.914843787 +0000 UTC m=+1026.423014591"
Jan 28 15:35:35 crc kubenswrapper[4656]: I0128 15:35:35.944739 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2" podStartSLOduration=3.813766146 podStartE2EDuration="45.944715498s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:52.821596219 +0000 UTC m=+983.329767023" lastFinishedPulling="2026-01-28 15:35:34.952545571 +0000 UTC m=+1025.460716375" observedRunningTime="2026-01-28 15:35:35.943907095 +0000 UTC m=+1026.452077899" watchObservedRunningTime="2026-01-28 15:35:35.944715498 +0000 UTC m=+1026.452886302"
Jan 28 15:35:36 crc kubenswrapper[4656]: I0128 15:35:36.031342 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj" podStartSLOduration=5.170970405 podStartE2EDuration="46.031325675s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.299436482 +0000 UTC m=+983.807607286" lastFinishedPulling="2026-01-28 15:35:34.159791742 +0000 UTC m=+1024.667962556" observedRunningTime="2026-01-28 15:35:35.98189 +0000 UTC m=+1026.490060804" watchObservedRunningTime="2026-01-28 15:35:36.031325675 +0000 UTC m=+1026.539496469"
Jan 28 15:35:36 crc kubenswrapper[4656]: I0128 15:35:36.034263 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw" podStartSLOduration=4.568727836 podStartE2EDuration="46.034253879s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:52.801049177 +0000 UTC m=+983.309219981" lastFinishedPulling="2026-01-28 15:35:34.26657522 +0000 UTC m=+1024.774746024" observedRunningTime="2026-01-28 15:35:36.02837953 +0000 UTC m=+1026.536550334" watchObservedRunningTime="2026-01-28 15:35:36.034253879 +0000 UTC m=+1026.542424683"
Jan 28 15:35:36 crc kubenswrapper[4656]: I0128 15:35:36.101425 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq" podStartSLOduration=3.688565207 podStartE2EDuration="46.101398984s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:52.558559468 +0000 UTC m=+983.066730272" lastFinishedPulling="2026-01-28 15:35:34.971393245 +0000 UTC m=+1025.479564049" observedRunningTime="2026-01-28 15:35:36.09674718 +0000 UTC m=+1026.604917994" watchObservedRunningTime="2026-01-28 15:35:36.101398984 +0000 UTC m=+1026.609569788"
Jan 28 15:35:36 crc kubenswrapper[4656]: I0128 15:35:36.210660 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt" podStartSLOduration=9.553388433 podStartE2EDuration="46.210623782s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:52.755231846 +0000 UTC m=+983.263402650" lastFinishedPulling="2026-01-28 15:35:29.412467155 +0000 UTC m=+1019.920637999" observedRunningTime="2026-01-28 15:35:36.206287687 +0000 UTC m=+1026.714458521" watchObservedRunningTime="2026-01-28 15:35:36.210623782 +0000 UTC m=+1026.718794586"
Jan 28 15:35:36 crc kubenswrapper[4656]: I0128 15:35:36.312138 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn" podStartSLOduration=45.312122458 podStartE2EDuration="45.312122458s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:35:36.308059841 +0000 UTC m=+1026.816230635" watchObservedRunningTime="2026-01-28 15:35:36.312122458 +0000 UTC m=+1026.820293262"
Jan 28 15:35:36 crc kubenswrapper[4656]: I0128 15:35:36.313198 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs" podStartSLOduration=4.36823662 podStartE2EDuration="45.313193539s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.408356371 +0000 UTC m=+983.916527175" lastFinishedPulling="2026-01-28 15:35:34.35331329 +0000 UTC m=+1024.861484094" observedRunningTime="2026-01-28 15:35:36.264791984 +0000 UTC m=+1026.772962788" watchObservedRunningTime="2026-01-28 15:35:36.313193539 +0000 UTC m=+1026.821364343"
Jan 28 15:35:36 crc kubenswrapper[4656]: I0128 15:35:36.337270 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw" podStartSLOduration=4.621088065 podStartE2EDuration="46.337244542s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:52.570645326 +0000 UTC m=+983.078816130" lastFinishedPulling="2026-01-28 15:35:34.286801803 +0000 UTC m=+1024.794972607" observedRunningTime="2026-01-28 15:35:36.335714768 +0000 UTC m=+1026.843885592" watchObservedRunningTime="2026-01-28 15:35:36.337244542 +0000 UTC m=+1026.845415346"
Jan 28 15:35:36 crc kubenswrapper[4656]: I0128 15:35:36.384965 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm" podStartSLOduration=5.514798654 podStartE2EDuration="46.384935707s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.397285682 +0000 UTC m=+983.905456486" lastFinishedPulling="2026-01-28 15:35:34.267422735 +0000 UTC m=+1024.775593539" observedRunningTime="2026-01-28 15:35:36.378571133 +0000 UTC m=+1026.886741937" watchObservedRunningTime="2026-01-28 15:35:36.384935707 +0000 UTC m=+1026.893106511"
Jan 28 15:35:37 crc kubenswrapper[4656]: E0128 15:35:37.172609 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" podUID="d903ea5b-f13e-43d5-b65b-44093c70ddee"
Jan 28 15:35:38 crc kubenswrapper[4656]: E0128 15:35:38.179211 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" podUID="e97e04fa-1b66-4373-b31f-12089f1f5b2b"
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.759425 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" event={"ID":"50db0152-72c0-4fc3-9cd5-6b2c01127341","Type":"ContainerStarted","Data":"94e3f215727027354e291f5bf9e4840840f6c34110ac5719885d13b2fc239d38"}
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.762206 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj"
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.764101 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" event={"ID":"3dcf45d4-628c-4071-b732-8ade2d3c4b4e","Type":"ContainerStarted","Data":"810a9250e2ddc43ffba09e5337730db8e84bd558ac00b7b4993fe50130dd0d7a"}
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.764837 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv"
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.766860 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" event={"ID":"7341d49c-e9a9-4108-8a2c-bf808ccb49cf","Type":"ContainerStarted","Data":"e19cc7d0ef107f8fa8c6b91e277cddf9c497e21c3ea8fbf9a42e7239228205eb"}
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.767139 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p"
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.781838 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj" podStartSLOduration=4.397470039 podStartE2EDuration="49.781808392s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.214881374 +0000 UTC m=+983.723052178" lastFinishedPulling="2026-01-28 15:35:38.599219727 +0000 UTC m=+1029.107390531" observedRunningTime="2026-01-28 15:35:39.777811847 +0000 UTC m=+1030.285982661" watchObservedRunningTime="2026-01-28 15:35:39.781808392 +0000 UTC m=+1030.289979206"
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.816542 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv" podStartSLOduration=43.817412036 podStartE2EDuration="48.816506132s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="2026-01-28 15:35:34.260784443 +0000 UTC m=+1024.768955247" lastFinishedPulling="2026-01-28 15:35:39.259878539 +0000 UTC m=+1029.768049343" observedRunningTime="2026-01-28 15:35:39.809937223 +0000 UTC m=+1030.318108037" watchObservedRunningTime="2026-01-28 15:35:39.816506132 +0000 UTC m=+1030.324676936"
Jan 28 15:35:39 crc kubenswrapper[4656]: I0128 15:35:39.848387 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p" podStartSLOduration=44.845192887 podStartE2EDuration="49.84836548s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:35:34.261047891 +0000 UTC m=+1024.769218695" lastFinishedPulling="2026-01-28 15:35:39.264220484 +0000 UTC m=+1029.772391288" observedRunningTime="2026-01-28 15:35:39.837983251 +0000 UTC m=+1030.346154075" watchObservedRunningTime="2026-01-28 15:35:39.84836548 +0000 UTC m=+1030.356536284"
Jan 28 15:35:40 crc kubenswrapper[4656]: I0128 15:35:40.672501 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-jwv7f"
Jan 28 15:35:40 crc kubenswrapper[4656]: I0128 15:35:40.674974 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-hd57q"
Jan 28 15:35:40 crc kubenswrapper[4656]: I0128 15:35:40.803883 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-cfpjq"
Jan 28 15:35:40 crc kubenswrapper[4656]: I0128 15:35:40.898326 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9q5lw"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.050641 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-r8cjw"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.131880 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-xjnqt"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.158945 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b84b46695-86ht2"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.263865 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.264362 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.264903 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-p92zm"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.359843 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-rjfbj"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.425035 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-958664b5-m9jtk"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.529721 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-765668569f-7kctj"
Jan 28 15:35:41 crc kubenswrapper[4656]: I0128 15:35:41.697861 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rmvr2"
Jan 28 15:35:42 crc kubenswrapper[4656]: I0128 15:35:42.127652 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-xlrqs"
Jan 28 15:35:44 crc kubenswrapper[4656]: E0128 15:35:44.172438 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" podUID="92d1569e-5733-4779-b9fb-7feae2ea9317"
Jan 28 15:35:44 crc kubenswrapper[4656]: E0128 15:35:44.172454 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:c9d639f3d01f7a4f139a8b7fb751ca880893f7b9a4e596d6a5304534e46392ba\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" podUID="5888a906-8758-4179-a30f-c2244ec46072"
Jan 28 15:35:45 crc kubenswrapper[4656]: E0128 15:35:45.172692 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" podUID="0bb42d6d-259a-4532-b3e2-732c0f271d9a"
Jan 28 15:35:46 crc kubenswrapper[4656]: I0128 15:35:46.816909 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-bfl2p"
Jan 28 15:35:47 crc kubenswrapper[4656]: I0128 15:35:47.702602 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df55nv"
Jan 28 15:35:47 crc kubenswrapper[4656]: I0128 15:35:47.775663 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-57d89bf95c-gltwn"
Jan 28 15:35:48 crc kubenswrapper[4656]: I0128 15:35:48.841914 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" event={"ID":"d903ea5b-f13e-43d5-b65b-44093c70ddee","Type":"ContainerStarted","Data":"d7c9ea7da01181d4c970b8e1f7644761eacbcbc39d44a66a216d26e1920fbdd7"}
Jan 28 15:35:48 crc kubenswrapper[4656]: I0128 15:35:48.843978 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv"
Jan 28 15:35:48 crc kubenswrapper[4656]: I0128 15:35:48.863001 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv" podStartSLOduration=2.606713173 podStartE2EDuration="57.862978642s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.434654459 +0000 UTC m=+983.942825273" lastFinishedPulling="2026-01-28 15:35:48.690919938 +0000 UTC m=+1039.199090742" observedRunningTime="2026-01-28 15:35:48.857956909 +0000 UTC m=+1039.366127733" watchObservedRunningTime="2026-01-28 15:35:48.862978642 +0000 UTC m=+1039.371149456"
Jan 28 15:35:51 crc kubenswrapper[4656]: I0128 15:35:51.666493 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-wjspj"
Jan 28 15:35:51 crc kubenswrapper[4656]: I0128 15:35:51.869592 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" event={"ID":"e97e04fa-1b66-4373-b31f-12089f1f5b2b","Type":"ContainerStarted","Data":"8db670b605dc99fdaaeead63f9fceb62223aae6d0ef80bd608d29ec722223f3e"}
Jan 28 15:35:51 crc kubenswrapper[4656]: I0128 15:35:51.870345 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg"
Jan 28 15:35:51 crc kubenswrapper[4656]: I0128 15:35:51.871438 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" event={"ID":"f37006c8-da19-4d17-a6d5-f4b075f2220f","Type":"ContainerStarted","Data":"4c87cbfa892f70b190b81ced3ca1efcfbd78b32612776036a134abf7eaff7182"}
Jan 28 15:35:51 crc kubenswrapper[4656]: I0128 15:35:51.871692 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d"
Jan 28 15:35:51 crc kubenswrapper[4656]: I0128 15:35:51.900645 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg" podStartSLOduration=3.190736296 podStartE2EDuration="1m0.900625316s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.518969499 +0000 UTC m=+984.027140303" lastFinishedPulling="2026-01-28 15:35:51.228858519 +0000 UTC m=+1041.737029323" observedRunningTime="2026-01-28 15:35:51.895388936 +0000 UTC m=+1042.403559750" watchObservedRunningTime="2026-01-28 15:35:51.900625316 +0000 UTC m=+1042.408796110"
Jan 28 15:35:51 crc kubenswrapper[4656]: I0128 15:35:51.924501 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d" podStartSLOduration=4.061959886 podStartE2EDuration="1m1.924481486s" podCreationTimestamp="2026-01-28 15:34:50 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.365134985 +0000 UTC m=+983.873305789" lastFinishedPulling="2026-01-28 15:35:51.227656585 +0000 UTC m=+1041.735827389" observedRunningTime="2026-01-28 15:35:51.921012157 +0000 UTC m=+1042.429182971" watchObservedRunningTime="2026-01-28 15:35:51.924481486 +0000 UTC m=+1042.432652290"
Jan 28 15:35:58 crc kubenswrapper[4656]: I0128 15:35:58.961499 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" event={"ID":"5888a906-8758-4179-a30f-c2244ec46072","Type":"ContainerStarted","Data":"cf12d8018e9f695069c711c5279902bbbe135cbdb396676262bc13eeaaab94f3"}
Jan 28 15:35:58 crc kubenswrapper[4656]: I0128 15:35:58.962461 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8"
Jan 28 15:35:58 crc kubenswrapper[4656]: I0128 15:35:58.979569 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8" podStartSLOduration=3.50138402 podStartE2EDuration="1m7.979547479s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.51970106 +0000 UTC m=+984.027871864" lastFinishedPulling="2026-01-28 15:35:57.997864519 +0000 UTC m=+1048.506035323" observedRunningTime="2026-01-28 15:35:58.975478853 +0000 UTC m=+1049.483649667" watchObservedRunningTime="2026-01-28 15:35:58.979547479 +0000 UTC m=+1049.487718273"
Jan 28 15:35:59 crc kubenswrapper[4656]: I0128 15:35:59.970821 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" event={"ID":"0bb42d6d-259a-4532-b3e2-732c0f271d9a","Type":"ContainerStarted","Data":"f32e28c76d4fe3fbdcfa35c3d3dbcf7521926127938cfec3194fb719800529f1"}
Jan 28 15:35:59 crc kubenswrapper[4656]: I0128 15:35:59.973289 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" event={"ID":"92d1569e-5733-4779-b9fb-7feae2ea9317","Type":"ContainerStarted","Data":"8823d6a69ee5db129ec37ce7257dffd633c2931225beaa105a9c8bf3d8e7dd9d"}
Jan 28 15:35:59 crc kubenswrapper[4656]: I0128 15:35:59.973578 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps"
Jan 28 15:35:59 crc kubenswrapper[4656]: I0128 15:35:59.992390 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sqgs8" podStartSLOduration=3.53110897 podStartE2EDuration="1m8.992360708s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.490966432 +0000 UTC m=+983.999137236" lastFinishedPulling="2026-01-28 15:35:58.95221817 +0000 UTC m=+1049.460388974" observedRunningTime="2026-01-28 15:35:59.986667746 +0000 UTC m=+1050.494838550" watchObservedRunningTime="2026-01-28 15:35:59.992360708 +0000 UTC m=+1050.500531512"
Jan 28 15:36:00 crc kubenswrapper[4656]: I0128 15:36:00.011246 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps" podStartSLOduration=2.750269414 podStartE2EDuration="1m9.011222206s" podCreationTimestamp="2026-01-28 15:34:51 +0000 UTC" firstStartedPulling="2026-01-28 15:34:53.430665574 +0000 UTC m=+983.938836378" lastFinishedPulling="2026-01-28 15:35:59.691618366 +0000 UTC m=+1050.199789170" observedRunningTime="2026-01-28 15:36:00.010079773 +0000 UTC m=+1050.518250577" watchObservedRunningTime="2026-01-28 15:36:00.011222206 +0000 UTC m=+1050.519393010"
Jan 28 15:36:01 crc kubenswrapper[4656]: I0128 15:36:01.529809 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-gqr2d"
Jan 28 15:36:02 crc kubenswrapper[4656]: I0128 15:36:02.057898 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9q9vg"
Jan 28 15:36:02 crc kubenswrapper[4656]: I0128 15:36:02.090752 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bxkwv"
Jan 28 15:36:11 crc kubenswrapper[4656]: I0128 15:36:11.264500 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:36:11 crc kubenswrapper[4656]: I0128 15:36:11.265213 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:36:11 crc kubenswrapper[4656]: I0128 15:36:11.265292 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk"
Jan 28 15:36:11 crc kubenswrapper[4656]: I0128 15:36:11.266079 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"45af716abfac826ba3a4dfbcd1d22436c5270721d55f11ffa5d85cae3cd0840f"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 15:36:11 crc kubenswrapper[4656]: I0128 15:36:11.266216 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://45af716abfac826ba3a4dfbcd1d22436c5270721d55f11ffa5d85cae3cd0840f" gracePeriod=600
Jan 28 15:36:11 crc kubenswrapper[4656]: I0128 15:36:11.814189 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-brxps"
Jan 28 15:36:11 crc kubenswrapper[4656]: I0128 15:36:11.991095 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6d69b9c5db-nmjz8"
Jan 28 15:36:12 crc kubenswrapper[4656]: I0128 15:36:12.122463 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="45af716abfac826ba3a4dfbcd1d22436c5270721d55f11ffa5d85cae3cd0840f" exitCode=0
Jan 28 15:36:12 crc kubenswrapper[4656]: I0128 15:36:12.122544 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"45af716abfac826ba3a4dfbcd1d22436c5270721d55f11ffa5d85cae3cd0840f"}
Jan 28 15:36:12 crc kubenswrapper[4656]: I0128 15:36:12.122837 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"e4705c729984fa104745366d57583e3ee80c3a326cc35a32720920b368391441"}
Jan 28 15:36:12 crc kubenswrapper[4656]: I0128 15:36:12.122956 4656 scope.go:117] "RemoveContainer" containerID="87c17d0db94ead712d442056e9a18e38055b40f27c59008c11f1ea77ac6037d0"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.741247 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-59sjg"]
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.742944 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.746899 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.748253 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.748555 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5dbss"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.752999 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.774350 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-59sjg"]
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.801224 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8a88808-6249-4879-b857-55182475c4a5-config\") pod \"dnsmasq-dns-675f4bcbfc-59sjg\" (UID: \"b8a88808-6249-4879-b857-55182475c4a5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.801308 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5v8l\" (UniqueName: \"kubernetes.io/projected/b8a88808-6249-4879-b857-55182475c4a5-kube-api-access-z5v8l\") pod \"dnsmasq-dns-675f4bcbfc-59sjg\" (UID: \"b8a88808-6249-4879-b857-55182475c4a5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.888134 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7lfkl"]
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.893334 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.896059 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.902908 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8a88808-6249-4879-b857-55182475c4a5-config\") pod \"dnsmasq-dns-675f4bcbfc-59sjg\" (UID: \"b8a88808-6249-4879-b857-55182475c4a5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.903019 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5v8l\" (UniqueName: \"kubernetes.io/projected/b8a88808-6249-4879-b857-55182475c4a5-kube-api-access-z5v8l\") pod \"dnsmasq-dns-675f4bcbfc-59sjg\" (UID: \"b8a88808-6249-4879-b857-55182475c4a5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.903758 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8a88808-6249-4879-b857-55182475c4a5-config\") pod \"dnsmasq-dns-675f4bcbfc-59sjg\" (UID: \"b8a88808-6249-4879-b857-55182475c4a5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg"
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.909742 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7lfkl"]
Jan 28 15:36:27 crc kubenswrapper[4656]: I0128 15:36:27.942648 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5v8l\" (UniqueName: \"kubernetes.io/projected/b8a88808-6249-4879-b857-55182475c4a5-kube-api-access-z5v8l\") pod \"dnsmasq-dns-675f4bcbfc-59sjg\" (UID: \"b8a88808-6249-4879-b857-55182475c4a5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.004756 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.004806 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlsqh\" (UniqueName: \"kubernetes.io/projected/cc574140-1aed-42c6-baab-b39625a3ae3b-kube-api-access-zlsqh\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.005120 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-config\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.066518 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.106917 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.106983 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlsqh\" (UniqueName: \"kubernetes.io/projected/cc574140-1aed-42c6-baab-b39625a3ae3b-kube-api-access-zlsqh\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.107061 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-config\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.108075 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-config\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.108717 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.124674 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlsqh\" (UniqueName: \"kubernetes.io/projected/cc574140-1aed-42c6-baab-b39625a3ae3b-kube-api-access-zlsqh\") pod \"dnsmasq-dns-78dd6ddcc-7lfkl\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.210499 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl"
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.828405 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7lfkl"]
Jan 28 15:36:28 crc kubenswrapper[4656]: W0128 15:36:28.897479 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8a88808_6249_4879_b857_55182475c4a5.slice/crio-f20fd4488a144f252fed0b07d5df3525000eb0b27f836032f05267a6bb4be1fc WatchSource:0}: Error finding container f20fd4488a144f252fed0b07d5df3525000eb0b27f836032f05267a6bb4be1fc: Status 404 returned error can't find the container with id f20fd4488a144f252fed0b07d5df3525000eb0b27f836032f05267a6bb4be1fc
Jan 28 15:36:28 crc kubenswrapper[4656]: I0128 15:36:28.898154 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-59sjg"]
Jan 28 15:36:29 crc kubenswrapper[4656]: I0128 15:36:29.249628 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg" event={"ID":"b8a88808-6249-4879-b857-55182475c4a5","Type":"ContainerStarted","Data":"f20fd4488a144f252fed0b07d5df3525000eb0b27f836032f05267a6bb4be1fc"}
Jan 28 15:36:29 crc kubenswrapper[4656]: I0128 15:36:29.250997 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl" event={"ID":"cc574140-1aed-42c6-baab-b39625a3ae3b","Type":"ContainerStarted","Data":"b9ce7457339510772a4afabfed4cb6250dc208ca5dccec3d956588e40ccb2fc7"}
Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.077906 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-59sjg"]
Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.113461 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-vnkgc"]
Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.115191 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.168885 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-vnkgc"] Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.257504 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l874\" (UniqueName: \"kubernetes.io/projected/c1211ebf-bf2b-422a-8bf6-8ff685d27325-kube-api-access-6l874\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.257627 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.257690 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-config\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.361134 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.361244 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-config\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.361312 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l874\" (UniqueName: \"kubernetes.io/projected/c1211ebf-bf2b-422a-8bf6-8ff685d27325-kube-api-access-6l874\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.362853 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.363664 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-config\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.396745 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l874\" (UniqueName: 
\"kubernetes.io/projected/c1211ebf-bf2b-422a-8bf6-8ff685d27325-kube-api-access-6l874\") pod \"dnsmasq-dns-5ccc8479f9-vnkgc\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.511656 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.754706 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7lfkl"] Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.802740 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w2f4q"] Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.813209 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.831703 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w2f4q"] Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.975940 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztrdc\" (UniqueName: \"kubernetes.io/projected/4d610f18-f075-4fdd-9618-c807584a0d12-kube-api-access-ztrdc\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.976248 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-config\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:30 crc kubenswrapper[4656]: I0128 15:36:30.976273 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.077622 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-config\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.077664 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.077746 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztrdc\" (UniqueName: \"kubernetes.io/projected/4d610f18-f075-4fdd-9618-c807584a0d12-kube-api-access-ztrdc\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.078838 4656 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-config\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.079352 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.134964 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztrdc\" (UniqueName: \"kubernetes.io/projected/4d610f18-f075-4fdd-9618-c807584a0d12-kube-api-access-ztrdc\") pod \"dnsmasq-dns-57d769cc4f-w2f4q\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.148069 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.211748 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-vnkgc"] Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.313045 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.318682 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.328683 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-w59zb" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.336062 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.351547 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.351774 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.351904 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.352059 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.353453 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.408080 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" event={"ID":"c1211ebf-bf2b-422a-8bf6-8ff685d27325","Type":"ContainerStarted","Data":"63a5615d0933628808006a1d7c767190e6e914053a00c94d355b26e0c1287a0b"} Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.414056 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497062 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497119 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497138 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497168 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497194 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/07f26e32-4b43-4591-9ed2-6426a96e596e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497326 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnqw5\" (UniqueName: \"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-kube-api-access-cnqw5\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497386 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497455 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497476 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497583 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.497667 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/07f26e32-4b43-4591-9ed2-6426a96e596e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599649 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599726 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/07f26e32-4b43-4591-9ed2-6426a96e596e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599756 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599784 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599801 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599817 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599833 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/07f26e32-4b43-4591-9ed2-6426a96e596e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599861 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnqw5\" (UniqueName: 
\"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-kube-api-access-cnqw5\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599876 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599899 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.599919 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.600304 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.601083 4656 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.601634 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.604323 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.604546 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.624884 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.626526 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07f26e32-4b43-4591-9ed2-6426a96e596e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.627718 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/07f26e32-4b43-4591-9ed2-6426a96e596e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.628603 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/07f26e32-4b43-4591-9ed2-6426a96e596e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.629277 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.668969 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.684230 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnqw5\" (UniqueName: \"kubernetes.io/projected/07f26e32-4b43-4591-9ed2-6426a96e596e-kube-api-access-cnqw5\") pod \"rabbitmq-cell1-server-0\" (UID: \"07f26e32-4b43-4591-9ed2-6426a96e596e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.954069 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:36:31 crc kubenswrapper[4656]: I0128 15:36:31.955285 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w2f4q"] Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.023141 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.025657 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.029038 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.038044 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.039143 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-2swg5" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.043706 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.043885 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.043705 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.043944 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.054600 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127694 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127770 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2239f1cd-f384-40df-9f71-a46caf290038-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127802 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127830 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz557\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-kube-api-access-bz557\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127877 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-config-data\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127910 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127939 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2239f1cd-f384-40df-9f71-a46caf290038-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127976 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.127992 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.128016 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.128043 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229215 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229282 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229304 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229338 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2239f1cd-f384-40df-9f71-a46caf290038-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229367 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229404 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz557\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-kube-api-access-bz557\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229476 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-config-data\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229520 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229554 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2239f1cd-f384-40df-9f71-a46caf290038-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229597 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.229620 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.230173 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.230353 4656 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.232888 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.233794 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-config-data\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.234745 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.235979 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2239f1cd-f384-40df-9f71-a46caf290038-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.236382 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.238141 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2239f1cd-f384-40df-9f71-a46caf290038-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.240542 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2239f1cd-f384-40df-9f71-a46caf290038-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.240963 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.258743 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz557\" (UniqueName: \"kubernetes.io/projected/2239f1cd-f384-40df-9f71-a46caf290038-kube-api-access-bz557\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.275561 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"2239f1cd-f384-40df-9f71-a46caf290038\") " pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.393572 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.448778 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" event={"ID":"4d610f18-f075-4fdd-9618-c807584a0d12","Type":"ContainerStarted","Data":"f223adfd127e099f079d9c57cf03da22cee668c8861dac91721c1feb6c571fac"} Jan 28 15:36:32 crc kubenswrapper[4656]: I0128 15:36:32.780218 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 15:36:32 crc kubenswrapper[4656]: W0128 15:36:32.845895 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07f26e32_4b43_4591_9ed2_6426a96e596e.slice/crio-7dc87b329510cdfa8b14270f027da873db175ebfbaed4b1590d5c1cc63026cba WatchSource:0}: Error finding container 7dc87b329510cdfa8b14270f027da873db175ebfbaed4b1590d5c1cc63026cba: Status 404 returned error can't find the container with id 7dc87b329510cdfa8b14270f027da873db175ebfbaed4b1590d5c1cc63026cba Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.025406 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 15:36:33 crc kubenswrapper[4656]: W0128 15:36:33.074916 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2239f1cd_f384_40df_9f71_a46caf290038.slice/crio-8e113ef39b9d7a5f8d98d7908e05b72256bce4e8b213a6ad67dc4229bb887959 WatchSource:0}: Error finding container 8e113ef39b9d7a5f8d98d7908e05b72256bce4e8b213a6ad67dc4229bb887959: Status 404 returned error can't find the container with id 8e113ef39b9d7a5f8d98d7908e05b72256bce4e8b213a6ad67dc4229bb887959 Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.202496 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.203875 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.203991 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.215629 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.216097 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-gjcq4" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.216305 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.218678 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.219316 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.354927 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-config-data-default\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.355287 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.355319 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e41d89b-8943-4aec-9e33-00db569a2ce8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.355355 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.355388 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e41d89b-8943-4aec-9e33-00db569a2ce8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.355430 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-kolla-config\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.355507 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e41d89b-8943-4aec-9e33-00db569a2ce8-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.355552 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbd5f\" (UniqueName: \"kubernetes.io/projected/8e41d89b-8943-4aec-9e33-00db569a2ce8-kube-api-access-qbd5f\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.456928 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-kolla-config\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.457007 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e41d89b-8943-4aec-9e33-00db569a2ce8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.457055 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbd5f\" (UniqueName: \"kubernetes.io/projected/8e41d89b-8943-4aec-9e33-00db569a2ce8-kube-api-access-qbd5f\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.457158 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.457284 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-config-data-default\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.457315 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e41d89b-8943-4aec-9e33-00db569a2ce8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.457379 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.457406 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e41d89b-8943-4aec-9e33-00db569a2ce8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.459354 4656 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-config-data-default\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.459418 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-kolla-config\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.459544 4656 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.459712 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8e41d89b-8943-4aec-9e33-00db569a2ce8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.464952 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e41d89b-8943-4aec-9e33-00db569a2ce8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.466369 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e41d89b-8943-4aec-9e33-00db569a2ce8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.467722 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e41d89b-8943-4aec-9e33-00db569a2ce8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.480407 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbd5f\" (UniqueName: \"kubernetes.io/projected/8e41d89b-8943-4aec-9e33-00db569a2ce8-kube-api-access-qbd5f\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.510235 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"8e41d89b-8943-4aec-9e33-00db569a2ce8\") " pod="openstack/openstack-galera-0" Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.552459 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0"
Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.560468 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2239f1cd-f384-40df-9f71-a46caf290038","Type":"ContainerStarted","Data":"8e113ef39b9d7a5f8d98d7908e05b72256bce4e8b213a6ad67dc4229bb887959"}
Jan 28 15:36:33 crc kubenswrapper[4656]: I0128 15:36:33.568439 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"07f26e32-4b43-4591-9ed2-6426a96e596e","Type":"ContainerStarted","Data":"7dc87b329510cdfa8b14270f027da873db175ebfbaed4b1590d5c1cc63026cba"}
Jan 28 15:36:34 crc kubenswrapper[4656]: I0128 15:36:34.624649 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 28 15:36:34 crc kubenswrapper[4656]: I0128 15:36:34.901857 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Jan 28 15:36:34 crc kubenswrapper[4656]: I0128 15:36:34.902820 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 28 15:36:34 crc kubenswrapper[4656]: I0128 15:36:34.917618 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Jan 28 15:36:34 crc kubenswrapper[4656]: I0128 15:36:34.917902 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Jan 28 15:36:34 crc kubenswrapper[4656]: I0128 15:36:34.918032 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-dk9jh"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.010919 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-config-data\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.010976 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-kolla-config\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.011024 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmdzt\" (UniqueName: \"kubernetes.io/projected/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-kube-api-access-nmdzt\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.011076 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.011096 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.028614 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.112947 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.112995 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.113035 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-config-data\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.113058 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-kolla-config\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.113100 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmdzt\" (UniqueName: \"kubernetes.io/projected/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-kube-api-access-nmdzt\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.114786 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-config-data\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.115603 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-kolla-config\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.146945 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.165828 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmdzt\" (UniqueName: \"kubernetes.io/projected/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-kube-api-access-nmdzt\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.169886 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25455cb-6f99-4958-b7bd-9fa56e45f6e1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f25455cb-6f99-4958-b7bd-9fa56e45f6e1\") " pod="openstack/memcached-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.250631 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.679759 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e41d89b-8943-4aec-9e33-00db569a2ce8","Type":"ContainerStarted","Data":"3e1b56011b4053bdfce648887f94f25347b29dd451dd102b7d360e36a13a97e6"}
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.783640 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.790591 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.807689 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.820898 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.821122 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.821184 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-8p555"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.821832 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.860277 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 28 15:36:35 crc kubenswrapper[4656]: W0128 15:36:35.925557 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf25455cb_6f99_4958_b7bd_9fa56e45f6e1.slice/crio-3fae74a8425e13bf3a9007e7c925bf149a0245b65c6c1a59f53e092a2b7f8cdc WatchSource:0}: Error finding container 3fae74a8425e13bf3a9007e7c925bf149a0245b65c6c1a59f53e092a2b7f8cdc: Status 404 returned error can't find the container with id 3fae74a8425e13bf3a9007e7c925bf149a0245b65c6c1a59f53e092a2b7f8cdc
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.943587 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hvs4\" (UniqueName: \"kubernetes.io/projected/6a46bc21-63f0-461d-b33d-ec98cb059408-kube-api-access-6hvs4\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.943650 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a46bc21-63f0-461d-b33d-ec98cb059408-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.943727 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.943758 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.943951 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.943981 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6a46bc21-63f0-461d-b33d-ec98cb059408-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.944021 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a46bc21-63f0-461d-b33d-ec98cb059408-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:35 crc kubenswrapper[4656]: I0128 15:36:35.944056 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.044905 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hvs4\" (UniqueName: \"kubernetes.io/projected/6a46bc21-63f0-461d-b33d-ec98cb059408-kube-api-access-6hvs4\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.044955 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a46bc21-63f0-461d-b33d-ec98cb059408-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.045016 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.045036 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.045086 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.045101 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6a46bc21-63f0-461d-b33d-ec98cb059408-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.045143 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a46bc21-63f0-461d-b33d-ec98cb059408-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.045182 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.045883 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6a46bc21-63f0-461d-b33d-ec98cb059408-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.046224 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.046450 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.047212 4656 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.047333 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a46bc21-63f0-461d-b33d-ec98cb059408-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.051939 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a46bc21-63f0-461d-b33d-ec98cb059408-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.077069 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hvs4\" (UniqueName: \"kubernetes.io/projected/6a46bc21-63f0-461d-b33d-ec98cb059408-kube-api-access-6hvs4\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.083348 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a46bc21-63f0-461d-b33d-ec98cb059408-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.148745 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6a46bc21-63f0-461d-b33d-ec98cb059408\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.436629 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 28 15:36:36 crc kubenswrapper[4656]: I0128 15:36:36.731398 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f25455cb-6f99-4958-b7bd-9fa56e45f6e1","Type":"ContainerStarted","Data":"3fae74a8425e13bf3a9007e7c925bf149a0245b65c6c1a59f53e092a2b7f8cdc"}
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.275743 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.413750 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.416519 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.420614 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vn9b9"
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.513013 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.514932 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp9sh\" (UniqueName: \"kubernetes.io/projected/7eff8e04-7afc-4a92-998f-db692ece65e7-kube-api-access-cp9sh\") pod \"kube-state-metrics-0\" (UID: \"7eff8e04-7afc-4a92-998f-db692ece65e7\") " pod="openstack/kube-state-metrics-0"
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.618679 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp9sh\" (UniqueName: \"kubernetes.io/projected/7eff8e04-7afc-4a92-998f-db692ece65e7-kube-api-access-cp9sh\") pod \"kube-state-metrics-0\" (UID: \"7eff8e04-7afc-4a92-998f-db692ece65e7\") " pod="openstack/kube-state-metrics-0"
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.671589 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp9sh\" (UniqueName: \"kubernetes.io/projected/7eff8e04-7afc-4a92-998f-db692ece65e7-kube-api-access-cp9sh\") pod \"kube-state-metrics-0\" (UID: \"7eff8e04-7afc-4a92-998f-db692ece65e7\") " pod="openstack/kube-state-metrics-0"
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.758612 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 28 15:36:37 crc kubenswrapper[4656]: I0128 15:36:37.825487 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6a46bc21-63f0-461d-b33d-ec98cb059408","Type":"ContainerStarted","Data":"fbd72bea68a7781922434bc7f920fbe71b31713688c7e0bd6a5494f1c674275f"}
Jan 28 15:36:38 crc kubenswrapper[4656]: I0128 15:36:38.776867 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 15:36:38 crc kubenswrapper[4656]: W0128 15:36:38.823542 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7eff8e04_7afc_4a92_998f_db692ece65e7.slice/crio-7d1c788b0bdecd35d3e97f7e5df617fd25b751c56801dea0ee91f2799c8baf48 WatchSource:0}: Error finding container 7d1c788b0bdecd35d3e97f7e5df617fd25b751c56801dea0ee91f2799c8baf48: Status 404 returned error can't find the container with id 7d1c788b0bdecd35d3e97f7e5df617fd25b751c56801dea0ee91f2799c8baf48
Jan 28 15:36:39 crc kubenswrapper[4656]: I0128 15:36:39.871334 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7eff8e04-7afc-4a92-998f-db692ece65e7","Type":"ContainerStarted","Data":"7d1c788b0bdecd35d3e97f7e5df617fd25b751c56801dea0ee91f2799c8baf48"}
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.319876 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.324203 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.331634 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-lmk2h"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.333125 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.333356 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.333567 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.343345 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.343563 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7pppv"]
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.344627 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.353294 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.353785 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-c6cm2"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.354026 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.366518 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.377562 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-28hwk"]
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.386678 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.394863 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7pppv"]
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.403656 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-28hwk"]
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.424223 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/681fa692-9a54-4d03-a31c-952409143c4f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.424265 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.424286 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681fa692-9a54-4d03-a31c-952409143c4f-config\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.424312 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.424348 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgjpl\" (UniqueName: \"kubernetes.io/projected/681fa692-9a54-4d03-a31c-952409143c4f-kube-api-access-vgjpl\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.424420 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.424520 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/681fa692-9a54-4d03-a31c-952409143c4f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.424548 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.525843 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-run\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.525902 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px5pg\" (UniqueName: \"kubernetes.io/projected/beab0392-2167-4283-97ae-12498c5d02c1-kube-api-access-px5pg\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.525942 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gt2w\" (UniqueName: \"kubernetes.io/projected/4815f130-4106-456b-9bcb-b34536d9ddc9-kube-api-access-4gt2w\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.525974 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-run-ovn\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.525990 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4815f130-4106-456b-9bcb-b34536d9ddc9-ovn-controller-tls-certs\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526026 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526063 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/beab0392-2167-4283-97ae-12498c5d02c1-scripts\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526123 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/681fa692-9a54-4d03-a31c-952409143c4f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526147 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4815f130-4106-456b-9bcb-b34536d9ddc9-scripts\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526184 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526208 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-etc-ovs\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526229 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/681fa692-9a54-4d03-a31c-952409143c4f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526249 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526272 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681fa692-9a54-4d03-a31c-952409143c4f-config\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526293 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-lib\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526311 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526331 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-log\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526367 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgjpl\" (UniqueName: \"kubernetes.io/projected/681fa692-9a54-4d03-a31c-952409143c4f-kube-api-access-vgjpl\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526388 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-log-ovn\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526403 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4815f130-4106-456b-9bcb-b34536d9ddc9-combined-ca-bundle\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.526423 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-run\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.530897 4656 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.532711 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/681fa692-9a54-4d03-a31c-952409143c4f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.533310 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/681fa692-9a54-4d03-a31c-952409143c4f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.536762 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681fa692-9a54-4d03-a31c-952409143c4f-config\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.569514 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.569614 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.570652 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/681fa692-9a54-4d03-a31c-952409143c4f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.577007 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.589759 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgjpl\" (UniqueName: \"kubernetes.io/projected/681fa692-9a54-4d03-a31c-952409143c4f-kube-api-access-vgjpl\") pod \"ovsdbserver-nb-0\" (UID: \"681fa692-9a54-4d03-a31c-952409143c4f\") " pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633231 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-lib\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633275 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-log\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633310 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-log-ovn\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633331 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4815f130-4106-456b-9bcb-b34536d9ddc9-combined-ca-bundle\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633354 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-run\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633374 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-run\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633394 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px5pg\" (UniqueName: \"kubernetes.io/projected/beab0392-2167-4283-97ae-12498c5d02c1-kube-api-access-px5pg\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633434 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gt2w\" (UniqueName: \"kubernetes.io/projected/4815f130-4106-456b-9bcb-b34536d9ddc9-kube-api-access-4gt2w\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633462 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-run-ovn\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633478 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4815f130-4106-456b-9bcb-b34536d9ddc9-ovn-controller-tls-certs\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633506 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/beab0392-2167-4283-97ae-12498c5d02c1-scripts\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633549 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4815f130-4106-456b-9bcb-b34536d9ddc9-scripts\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.633565 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-etc-ovs\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.634117 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-etc-ovs\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.634314 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-lib\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.634436 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-log\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.634508 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-log-ovn\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.635110 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/beab0392-2167-4283-97ae-12498c5d02c1-var-run\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.635173 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-run\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.642660 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4815f130-4106-456b-9bcb-b34536d9ddc9-ovn-controller-tls-certs\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.642817 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4815f130-4106-456b-9bcb-b34536d9ddc9-var-run-ovn\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.654642 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4815f130-4106-456b-9bcb-b34536d9ddc9-scripts\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.663964 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4815f130-4106-456b-9bcb-b34536d9ddc9-combined-ca-bundle\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.664346 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/beab0392-2167-4283-97ae-12498c5d02c1-scripts\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.675099 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gt2w\" (UniqueName: \"kubernetes.io/projected/4815f130-4106-456b-9bcb-b34536d9ddc9-kube-api-access-4gt2w\") pod \"ovn-controller-7pppv\" (UID: \"4815f130-4106-456b-9bcb-b34536d9ddc9\") " pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.675352 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px5pg\" (UniqueName: \"kubernetes.io/projected/beab0392-2167-4283-97ae-12498c5d02c1-kube-api-access-px5pg\") pod \"ovn-controller-ovs-28hwk\" (UID: \"beab0392-2167-4283-97ae-12498c5d02c1\") " pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.715751 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.725324 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7pppv"
Jan 28 15:36:41 crc kubenswrapper[4656]: I0128 15:36:41.749378 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-28hwk"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.300459 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.302100 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.304416 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-wtkr6"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.305393 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.306204 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.310565 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.325957 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.398333 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.398736 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.398810 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99d74\" (UniqueName: \"kubernetes.io/projected/da949f76-8013-4824-bda9-0656b43920b5-kube-api-access-99d74\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.398861 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.398885 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/da949f76-8013-4824-bda9-0656b43920b5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.398975 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.399081 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da949f76-8013-4824-bda9-0656b43920b5-config\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.399138 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da949f76-8013-4824-bda9-0656b43920b5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.501281 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99d74\" (UniqueName: \"kubernetes.io/projected/da949f76-8013-4824-bda9-0656b43920b5-kube-api-access-99d74\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.501348 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.501378 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/da949f76-8013-4824-bda9-0656b43920b5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.501493 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.501633 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da949f76-8013-4824-bda9-0656b43920b5-config\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.501696 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da949f76-8013-4824-bda9-0656b43920b5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.501775 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.501808 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.504153 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da949f76-8013-4824-bda9-0656b43920b5-config\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.504468 4656 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.505236 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da949f76-8013-4824-bda9-0656b43920b5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.506095 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/da949f76-8013-4824-bda9-0656b43920b5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.508546 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.510840 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.517907 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da949f76-8013-4824-bda9-0656b43920b5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.529054 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99d74\" (UniqueName: \"kubernetes.io/projected/da949f76-8013-4824-bda9-0656b43920b5-kube-api-access-99d74\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.532319 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"da949f76-8013-4824-bda9-0656b43920b5\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 15:36:44 crc kubenswrapper[4656]: I0128 15:36:44.632591 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 28 15:37:02 crc kubenswrapper[4656]: E0128 15:37:02.890836 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified"
Jan 28 15:37:02 crc kubenswrapper[4656]: E0128 15:37:02.891779 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hvs4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(6a46bc21-63f0-461d-b33d-ec98cb059408): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 15:37:02 crc kubenswrapper[4656]: E0128 15:37:02.893072 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="6a46bc21-63f0-461d-b33d-ec98cb059408"
Jan 28 15:37:02 crc kubenswrapper[4656]: E0128 15:37:02.945111 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified"
Jan 28 15:37:02 crc kubenswrapper[4656]: E0128 15:37:02.945522 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbd5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(8e41d89b-8943-4aec-9e33-00db569a2ce8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 15:37:02 crc kubenswrapper[4656]: E0128 15:37:02.946816 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="8e41d89b-8943-4aec-9e33-00db569a2ce8"
Jan 28 15:37:03 crc kubenswrapper[4656]: E0128 15:37:03.275064 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="8e41d89b-8943-4aec-9e33-00db569a2ce8"
Jan 28 15:37:03 crc kubenswrapper[4656]: E0128 15:37:03.275630 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="6a46bc21-63f0-461d-b33d-ec98cb059408"
Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.173729 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled"
image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.173981 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cnqw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(07f26e32-4b43-4591-9ed2-6426a96e596e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.175428 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="07f26e32-4b43-4591-9ed2-6426a96e596e" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.282688 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="07f26e32-4b43-4591-9ed2-6426a96e596e" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.943736 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.944024 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5ddh5f9h655h57bh5d6h96h5bch59fh686h5c6h68ch56bh5f7h9bhfh5dch665hdchf5h584h696h74hc5h569h5fbh5bh54h94h64ch98h65hf9q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmdzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(f25455cb-6f99-4958-b7bd-9fa56e45f6e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.945363 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="f25455cb-6f99-4958-b7bd-9fa56e45f6e1" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.974103 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.974426 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bz557,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(2239f1cd-f384-40df-9f71-a46caf290038): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:04 crc kubenswrapper[4656]: E0128 15:37:04.976156 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="2239f1cd-f384-40df-9f71-a46caf290038" Jan 28 15:37:05 crc kubenswrapper[4656]: E0128 15:37:05.290565 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="2239f1cd-f384-40df-9f71-a46caf290038" Jan 28 15:37:05 crc kubenswrapper[4656]: E0128 15:37:05.294295 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="f25455cb-6f99-4958-b7bd-9fa56e45f6e1" Jan 28 15:37:10 crc kubenswrapper[4656]: I0128 15:37:10.857254 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-28hwk"] Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.869098 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 
15:37:10.869655 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5v8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-59sjg_openstack(b8a88808-6249-4879-b857-55182475c4a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.870812 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg" podUID="b8a88808-6249-4879-b857-55182475c4a5" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.881413 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.881598 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlsqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-7lfkl_openstack(cc574140-1aed-42c6-baab-b39625a3ae3b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.882862 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl" podUID="cc574140-1aed-42c6-baab-b39625a3ae3b" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.885026 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.885225 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6l874,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-vnkgc_openstack(c1211ebf-bf2b-422a-8bf6-8ff685d27325): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.886677 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" podUID="c1211ebf-bf2b-422a-8bf6-8ff685d27325" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.953842 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.953989 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztrdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-w2f4q_openstack(4d610f18-f075-4fdd-9618-c807584a0d12): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:37:10 crc kubenswrapper[4656]: E0128 15:37:10.955521 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" podUID="4d610f18-f075-4fdd-9618-c807584a0d12" Jan 28 15:37:11 crc kubenswrapper[4656]: I0128 15:37:11.368625 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7pppv"] Jan 28 15:37:11 crc kubenswrapper[4656]: I0128 15:37:11.372725 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-28hwk" event={"ID":"beab0392-2167-4283-97ae-12498c5d02c1","Type":"ContainerStarted","Data":"43d4c3c043f701685e050b0ac171acad9522a575c3f66b091b921b3560befa27"} Jan 28 15:37:11 crc kubenswrapper[4656]: E0128 15:37:11.374823 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" podUID="4d610f18-f075-4fdd-9618-c807584a0d12" Jan 28 15:37:11 crc kubenswrapper[4656]: E0128 15:37:11.375747 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" podUID="c1211ebf-bf2b-422a-8bf6-8ff685d27325" Jan 28 15:37:11 crc kubenswrapper[4656]: I0128 
15:37:11.654340 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 15:37:12 crc kubenswrapper[4656]: E0128 15:37:12.192713 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 28 15:37:12 crc kubenswrapper[4656]: E0128 15:37:12.192766 4656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 28 15:37:12 crc kubenswrapper[4656]: E0128 15:37:12.192922 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cp9sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(7eff8e04-7afc-4a92-998f-db692ece65e7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:37:12 crc kubenswrapper[4656]: E0128 15:37:12.194552 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="7eff8e04-7afc-4a92-998f-db692ece65e7" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.242694 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.258080 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.352210 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 15:37:12 crc kubenswrapper[4656]: W0128 15:37:12.359120 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda949f76_8013_4824_bda9_0656b43920b5.slice/crio-31bbcf00d650f0e412864859f03bd0ede953c8294dbffa34782563edd52a7fa3 WatchSource:0}: Error finding container 31bbcf00d650f0e412864859f03bd0ede953c8294dbffa34782563edd52a7fa3: Status 404 returned error can't find the container with id 31bbcf00d650f0e412864859f03bd0ede953c8294dbffa34782563edd52a7fa3 Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.381315 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"da949f76-8013-4824-bda9-0656b43920b5","Type":"ContainerStarted","Data":"31bbcf00d650f0e412864859f03bd0ede953c8294dbffa34782563edd52a7fa3"} Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.382845 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"681fa692-9a54-4d03-a31c-952409143c4f","Type":"ContainerStarted","Data":"28fc1ab98d1cef9f48510102443528037c1500c7d9bd171fc47c3a1bd2c07074"} Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.386472 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg" event={"ID":"b8a88808-6249-4879-b857-55182475c4a5","Type":"ContainerDied","Data":"f20fd4488a144f252fed0b07d5df3525000eb0b27f836032f05267a6bb4be1fc"} Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.386668 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-59sjg" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.392104 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl" event={"ID":"cc574140-1aed-42c6-baab-b39625a3ae3b","Type":"ContainerDied","Data":"b9ce7457339510772a4afabfed4cb6250dc208ca5dccec3d956588e40ccb2fc7"} Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.392147 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7lfkl" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.395503 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7pppv" event={"ID":"4815f130-4106-456b-9bcb-b34536d9ddc9","Type":"ContainerStarted","Data":"ee88d12a32cad34a07005af6c62f7bd52ba5f4beb93a283feec77fb9bfebd34b"} Jan 28 15:37:12 crc kubenswrapper[4656]: E0128 15:37:12.397317 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="7eff8e04-7afc-4a92-998f-db692ece65e7" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.406323 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8a88808-6249-4879-b857-55182475c4a5-config\") pod \"b8a88808-6249-4879-b857-55182475c4a5\" (UID: \"b8a88808-6249-4879-b857-55182475c4a5\") " Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.406393 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-dns-svc\") pod \"cc574140-1aed-42c6-baab-b39625a3ae3b\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.406441 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-config\") pod \"cc574140-1aed-42c6-baab-b39625a3ae3b\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.406469 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlsqh\" (UniqueName: \"kubernetes.io/projected/cc574140-1aed-42c6-baab-b39625a3ae3b-kube-api-access-zlsqh\") pod \"cc574140-1aed-42c6-baab-b39625a3ae3b\" (UID: \"cc574140-1aed-42c6-baab-b39625a3ae3b\") " Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.406533 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5v8l\" (UniqueName: \"kubernetes.io/projected/b8a88808-6249-4879-b857-55182475c4a5-kube-api-access-z5v8l\") pod \"b8a88808-6249-4879-b857-55182475c4a5\" (UID: \"b8a88808-6249-4879-b857-55182475c4a5\") " Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.407184 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8a88808-6249-4879-b857-55182475c4a5-config" (OuterVolumeSpecName: "config") pod "b8a88808-6249-4879-b857-55182475c4a5" (UID: "b8a88808-6249-4879-b857-55182475c4a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.407229 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc574140-1aed-42c6-baab-b39625a3ae3b" (UID: "cc574140-1aed-42c6-baab-b39625a3ae3b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.407623 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8a88808-6249-4879-b857-55182475c4a5-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.407637 4656 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.410364 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-config" (OuterVolumeSpecName: "config") pod "cc574140-1aed-42c6-baab-b39625a3ae3b" (UID: "cc574140-1aed-42c6-baab-b39625a3ae3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.419319 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc574140-1aed-42c6-baab-b39625a3ae3b-kube-api-access-zlsqh" (OuterVolumeSpecName: "kube-api-access-zlsqh") pod "cc574140-1aed-42c6-baab-b39625a3ae3b" (UID: "cc574140-1aed-42c6-baab-b39625a3ae3b"). InnerVolumeSpecName "kube-api-access-zlsqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.419698 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a88808-6249-4879-b857-55182475c4a5-kube-api-access-z5v8l" (OuterVolumeSpecName: "kube-api-access-z5v8l") pod "b8a88808-6249-4879-b857-55182475c4a5" (UID: "b8a88808-6249-4879-b857-55182475c4a5"). InnerVolumeSpecName "kube-api-access-z5v8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.509457 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc574140-1aed-42c6-baab-b39625a3ae3b-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.509755 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlsqh\" (UniqueName: \"kubernetes.io/projected/cc574140-1aed-42c6-baab-b39625a3ae3b-kube-api-access-zlsqh\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.509770 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5v8l\" (UniqueName: \"kubernetes.io/projected/b8a88808-6249-4879-b857-55182475c4a5-kube-api-access-z5v8l\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.765845 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-59sjg"] Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.773738 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-59sjg"] Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.815338 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7lfkl"] Jan 28 15:37:12 crc kubenswrapper[4656]: I0128 15:37:12.826689 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7lfkl"] Jan 28 15:37:13 crc kubenswrapper[4656]: I0128 15:37:13.181739 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a88808-6249-4879-b857-55182475c4a5" path="/var/lib/kubelet/pods/b8a88808-6249-4879-b857-55182475c4a5/volumes" Jan 28 15:37:13 crc kubenswrapper[4656]: I0128 15:37:13.182109 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc574140-1aed-42c6-baab-b39625a3ae3b" path="/var/lib/kubelet/pods/cc574140-1aed-42c6-baab-b39625a3ae3b/volumes" Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.450371 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-28hwk" event={"ID":"beab0392-2167-4283-97ae-12498c5d02c1","Type":"ContainerStarted","Data":"bc639ec2d6bf1314e43b21b17cf8d76449f8b8bb14d7e24a9cea2ec054f03c67"} Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.452909 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6a46bc21-63f0-461d-b33d-ec98cb059408","Type":"ContainerStarted","Data":"4c09f6240f474151eb5040fa5896be8d322dfa0e215cd538c28ace35d81aa5ea"} Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.455436 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7pppv" event={"ID":"4815f130-4106-456b-9bcb-b34536d9ddc9","Type":"ContainerStarted","Data":"262f743ac64b4d0404f708a0ba8cb5862c95bc3d9ab3b49e82db849f6219484e"} Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.456060 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-7pppv" Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.458885 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"da949f76-8013-4824-bda9-0656b43920b5","Type":"ContainerStarted","Data":"0d6f9e6dedc080f1d2651b0859f2b139c4f4ae364eb80a6946a76e6cd8ec806b"} Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.460707 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/memcached-0" event={"ID":"f25455cb-6f99-4958-b7bd-9fa56e45f6e1","Type":"ContainerStarted","Data":"da52010fd6af69f869c9d92e361a078930f6fd989245b979cc18134154c605b6"} Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.461559 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.462963 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e41d89b-8943-4aec-9e33-00db569a2ce8","Type":"ContainerStarted","Data":"9b9fc19a1f3e858e5fc427e01bbeef6a977238f5023354a967caf3c887d5de0e"} Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.466560 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"681fa692-9a54-4d03-a31c-952409143c4f","Type":"ContainerStarted","Data":"b4a8cd4cc689d188c63069773647e15ab4be4770072365bcb118ab34ccd9d894"} Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.556531 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7pppv" podStartSLOduration=32.29407409 podStartE2EDuration="39.556509854s" podCreationTimestamp="2026-01-28 15:36:41 +0000 UTC" firstStartedPulling="2026-01-28 15:37:12.179973714 +0000 UTC m=+1122.688144518" lastFinishedPulling="2026-01-28 15:37:19.442409468 +0000 UTC m=+1129.950580282" observedRunningTime="2026-01-28 15:37:20.555592878 +0000 UTC m=+1131.063763682" watchObservedRunningTime="2026-01-28 15:37:20.556509854 +0000 UTC m=+1131.064680658" Jan 28 15:37:20 crc kubenswrapper[4656]: I0128 15:37:20.576773 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.073660924 podStartE2EDuration="46.576746331s" podCreationTimestamp="2026-01-28 15:36:34 +0000 UTC" firstStartedPulling="2026-01-28 15:36:35.937430477 +0000 UTC m=+1086.445601281" lastFinishedPulling="2026-01-28 15:37:19.440515884 +0000 UTC m=+1129.948686688" observedRunningTime="2026-01-28 15:37:20.575152715 +0000 UTC m=+1131.083323519" watchObservedRunningTime="2026-01-28 15:37:20.576746331 +0000 UTC m=+1131.084917155" Jan 28 15:37:21 crc kubenswrapper[4656]: I0128 15:37:21.484027 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"07f26e32-4b43-4591-9ed2-6426a96e596e","Type":"ContainerStarted","Data":"74ff180262c50c4a408406c295f8f1bca87a9e6fc375807df80958cce55bb379"} Jan 28 15:37:21 crc kubenswrapper[4656]: I0128 15:37:21.488717 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2239f1cd-f384-40df-9f71-a46caf290038","Type":"ContainerStarted","Data":"9d149d3029d11945b504fb085462ea962ef4c3fcb25963157c8baca85a61ef3e"} Jan 28 15:37:21 crc kubenswrapper[4656]: I0128 15:37:21.491698 4656 generic.go:334] "Generic (PLEG): container finished" podID="beab0392-2167-4283-97ae-12498c5d02c1" containerID="bc639ec2d6bf1314e43b21b17cf8d76449f8b8bb14d7e24a9cea2ec054f03c67" exitCode=0 Jan 28 15:37:21 crc kubenswrapper[4656]: I0128 15:37:21.491752 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-28hwk" event={"ID":"beab0392-2167-4283-97ae-12498c5d02c1","Type":"ContainerDied","Data":"bc639ec2d6bf1314e43b21b17cf8d76449f8b8bb14d7e24a9cea2ec054f03c67"} Jan 28 15:37:22 crc kubenswrapper[4656]: I0128 15:37:22.508264 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-28hwk" 
event={"ID":"beab0392-2167-4283-97ae-12498c5d02c1","Type":"ContainerStarted","Data":"6b67bfaa7406a14b181d3c9e2aff5e03553a3ebd87b0ce069924851826b27b2a"} Jan 28 15:37:22 crc kubenswrapper[4656]: I0128 15:37:22.508736 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-28hwk" event={"ID":"beab0392-2167-4283-97ae-12498c5d02c1","Type":"ContainerStarted","Data":"f47e73927eb4df8d70ed8020d0fc10b1e6b7c8af854ef88e73c64b5f45dcef87"} Jan 28 15:37:22 crc kubenswrapper[4656]: I0128 15:37:22.509606 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-28hwk" Jan 28 15:37:22 crc kubenswrapper[4656]: I0128 15:37:22.509633 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-28hwk" Jan 28 15:37:22 crc kubenswrapper[4656]: I0128 15:37:22.535054 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-28hwk" podStartSLOduration=33.033598718 podStartE2EDuration="41.535034618s" podCreationTimestamp="2026-01-28 15:36:41 +0000 UTC" firstStartedPulling="2026-01-28 15:37:10.941003109 +0000 UTC m=+1121.449173913" lastFinishedPulling="2026-01-28 15:37:19.442439009 +0000 UTC m=+1129.950609813" observedRunningTime="2026-01-28 15:37:22.532810125 +0000 UTC m=+1133.040980939" watchObservedRunningTime="2026-01-28 15:37:22.535034618 +0000 UTC m=+1133.043205422" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.235058 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vs8p5"] Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.248640 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.252346 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vs8p5"] Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.253366 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.256637 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.346962 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-config\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.347464 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.347659 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-ovn-rundir\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 
15:37:25.347901 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-ovs-rundir\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.354108 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-combined-ca-bundle\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.354343 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc7jl\" (UniqueName: \"kubernetes.io/projected/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-kube-api-access-tc7jl\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.453118 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w2f4q"] Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.457103 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-combined-ca-bundle\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.457237 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc7jl\" (UniqueName: \"kubernetes.io/projected/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-kube-api-access-tc7jl\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.457312 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-config\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.457358 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.457404 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-ovn-rundir\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.457464 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-ovs-rundir\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.457838 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-ovs-rundir\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.458205 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-ovn-rundir\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.459000 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-config\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.469011 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.469569 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-combined-ca-bundle\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.497050 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc7jl\" (UniqueName: \"kubernetes.io/projected/f39f654e-78ca-44c2-8c6a-a1de43a83d3f-kube-api-access-tc7jl\") pod \"ovn-controller-metrics-vs8p5\" (UID: \"f39f654e-78ca-44c2-8c6a-a1de43a83d3f\") " pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.523668 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9k966"] Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.538613 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.571077 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-config\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.571342 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.571620 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.571721 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5rh4\" (UniqueName: \"kubernetes.io/projected/ad05b286-d454-4ab3-b003-cdff0f888c5e-kube-api-access-l5rh4\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.579064 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.595899 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-vs8p5" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.651024 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9k966"] Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.695198 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.695281 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5rh4\" (UniqueName: \"kubernetes.io/projected/ad05b286-d454-4ab3-b003-cdff0f888c5e-kube-api-access-l5rh4\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.695341 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-config\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.695374 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.696457 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.696779 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.697080 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-config\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.780329 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5rh4\" (UniqueName: \"kubernetes.io/projected/ad05b286-d454-4ab3-b003-cdff0f888c5e-kube-api-access-l5rh4\") pod \"dnsmasq-dns-7fd796d7df-9k966\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.886616 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:25 crc kubenswrapper[4656]: I0128 15:37:25.976499 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-vnkgc"] Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.044673 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tmnbd"] Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.046402 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.050516 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.080935 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tmnbd"] Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.108390 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66n6d\" (UniqueName: \"kubernetes.io/projected/cadd7215-631f-469f-9d02-243efd40508a-kube-api-access-66n6d\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.108464 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.108504 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.108537 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-config\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.108584 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.209542 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66n6d\" (UniqueName: \"kubernetes.io/projected/cadd7215-631f-469f-9d02-243efd40508a-kube-api-access-66n6d\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.209620 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.209654 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.209691 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-config\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.209744 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.210742 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.211152 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.211428 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.212057 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-config\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.240989 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66n6d\" (UniqueName: \"kubernetes.io/projected/cadd7215-631f-469f-9d02-243efd40508a-kube-api-access-66n6d\") pod \"dnsmasq-dns-86db49b7ff-tmnbd\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.374232 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.712090 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.821844 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztrdc\" (UniqueName: \"kubernetes.io/projected/4d610f18-f075-4fdd-9618-c807584a0d12-kube-api-access-ztrdc\") pod \"4d610f18-f075-4fdd-9618-c807584a0d12\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.822080 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-config\") pod \"4d610f18-f075-4fdd-9618-c807584a0d12\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.822185 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-dns-svc\") pod \"4d610f18-f075-4fdd-9618-c807584a0d12\" (UID: \"4d610f18-f075-4fdd-9618-c807584a0d12\") " Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.822777 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-config" (OuterVolumeSpecName: "config") pod "4d610f18-f075-4fdd-9618-c807584a0d12" (UID: "4d610f18-f075-4fdd-9618-c807584a0d12"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.823822 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d610f18-f075-4fdd-9618-c807584a0d12" (UID: "4d610f18-f075-4fdd-9618-c807584a0d12"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.828825 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d610f18-f075-4fdd-9618-c807584a0d12-kube-api-access-ztrdc" (OuterVolumeSpecName: "kube-api-access-ztrdc") pod "4d610f18-f075-4fdd-9618-c807584a0d12" (UID: "4d610f18-f075-4fdd-9618-c807584a0d12"). InnerVolumeSpecName "kube-api-access-ztrdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.924147 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.924487 4656 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d610f18-f075-4fdd-9618-c807584a0d12-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:26 crc kubenswrapper[4656]: I0128 15:37:26.924499 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztrdc\" (UniqueName: \"kubernetes.io/projected/4d610f18-f075-4fdd-9618-c807584a0d12-kube-api-access-ztrdc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.322219 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vs8p5"] Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.495012 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tmnbd"] Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.506137 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9k966"] Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.546041 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vs8p5" event={"ID":"f39f654e-78ca-44c2-8c6a-a1de43a83d3f","Type":"ContainerStarted","Data":"0459d82bf81cb9d98b851ce0dc33b37b4608f574bca7ce64efe31e15395bd08e"} Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.554303 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"da949f76-8013-4824-bda9-0656b43920b5","Type":"ContainerStarted","Data":"ed16f6956dab3a0e253c0c91804af4966b98ad23b7e35d3059968bdae4d7a2ea"} Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.559617 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" event={"ID":"4d610f18-f075-4fdd-9618-c807584a0d12","Type":"ContainerDied","Data":"f223adfd127e099f079d9c57cf03da22cee668c8861dac91721c1feb6c571fac"} Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.559713 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-w2f4q" Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.565389 4656 generic.go:334] "Generic (PLEG): container finished" podID="c1211ebf-bf2b-422a-8bf6-8ff685d27325" containerID="d7b43fdb99a2acae6cf8b5d6e5948315f1bd352f9c1ad850d550c3eb79c16b71" exitCode=0 Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.565464 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" event={"ID":"c1211ebf-bf2b-422a-8bf6-8ff685d27325","Type":"ContainerDied","Data":"d7b43fdb99a2acae6cf8b5d6e5948315f1bd352f9c1ad850d550c3eb79c16b71"} Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.568000 4656 generic.go:334] "Generic (PLEG): container finished" podID="8e41d89b-8943-4aec-9e33-00db569a2ce8" containerID="9b9fc19a1f3e858e5fc427e01bbeef6a977238f5023354a967caf3c887d5de0e" exitCode=0 Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.568074 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e41d89b-8943-4aec-9e33-00db569a2ce8","Type":"ContainerDied","Data":"9b9fc19a1f3e858e5fc427e01bbeef6a977238f5023354a967caf3c887d5de0e"} Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.576089 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"681fa692-9a54-4d03-a31c-952409143c4f","Type":"ContainerStarted","Data":"d2f3fa08a429d53e058b1424fcbba9563239b7eaf5932e71b6c86fc0117c4f6c"} Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.581593 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7eff8e04-7afc-4a92-998f-db692ece65e7","Type":"ContainerStarted","Data":"24a3867adf9a884e876f5084f1beea5148b255d639f393cdbb1fb2dd6f7422fe"} Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.581988 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.590313 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=30.104906027 podStartE2EDuration="44.59028653s" podCreationTimestamp="2026-01-28 15:36:43 +0000 UTC" firstStartedPulling="2026-01-28 15:37:12.363436603 +0000 UTC m=+1122.871607407" lastFinishedPulling="2026-01-28 15:37:26.848817106 +0000 UTC m=+1137.356987910" observedRunningTime="2026-01-28 15:37:27.586445791 +0000 UTC m=+1138.094616595" watchObservedRunningTime="2026-01-28 15:37:27.59028653 +0000 UTC m=+1138.098457334" Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.605889 4656 generic.go:334] "Generic (PLEG): container finished" podID="6a46bc21-63f0-461d-b33d-ec98cb059408" containerID="4c09f6240f474151eb5040fa5896be8d322dfa0e215cd538c28ace35d81aa5ea" exitCode=0 Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.605941 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6a46bc21-63f0-461d-b33d-ec98cb059408","Type":"ContainerDied","Data":"4c09f6240f474151eb5040fa5896be8d322dfa0e215cd538c28ace35d81aa5ea"} Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.689459 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.538677125 podStartE2EDuration="50.689427116s" podCreationTimestamp="2026-01-28 15:36:37 +0000 UTC" firstStartedPulling="2026-01-28 15:36:38.864804147 +0000 UTC m=+1089.372974951" 
lastFinishedPulling="2026-01-28 15:37:27.015554138 +0000 UTC m=+1137.523724942" observedRunningTime="2026-01-28 15:37:27.681975424 +0000 UTC m=+1138.190146228" watchObservedRunningTime="2026-01-28 15:37:27.689427116 +0000 UTC m=+1138.197597940" Jan 28 15:37:27 crc kubenswrapper[4656]: I0128 15:37:27.944877 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=33.296595262 podStartE2EDuration="47.944837606s" podCreationTimestamp="2026-01-28 15:36:40 +0000 UTC" firstStartedPulling="2026-01-28 15:37:12.198951955 +0000 UTC m=+1122.707122759" lastFinishedPulling="2026-01-28 15:37:26.847194299 +0000 UTC m=+1137.355365103" observedRunningTime="2026-01-28 15:37:27.93444949 +0000 UTC m=+1138.442620314" watchObservedRunningTime="2026-01-28 15:37:27.944837606 +0000 UTC m=+1138.453008420" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.030768 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w2f4q"] Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.062327 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9k966"] Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.087468 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-w2f4q"] Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.125707 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-jpj69"] Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.128096 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.199242 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jpj69"] Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.203680 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d578\" (UniqueName: \"kubernetes.io/projected/c152e3b8-7b70-4580-988e-4cf053f87aa2-kube-api-access-8d578\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.203737 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-dns-svc\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.203825 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.203863 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 
15:37:28.203886 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-config\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.305654 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.305716 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.305740 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-config\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.305782 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d578\" (UniqueName: \"kubernetes.io/projected/c152e3b8-7b70-4580-988e-4cf053f87aa2-kube-api-access-8d578\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.305809 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-dns-svc\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.309688 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-dns-svc\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.310145 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.310438 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.310731 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-config\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.355264 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d578\" (UniqueName: \"kubernetes.io/projected/c152e3b8-7b70-4580-988e-4cf053f87aa2-kube-api-access-8d578\") pod \"dnsmasq-dns-698758b865-jpj69\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.433118 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.465979 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.507968 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-dns-svc\") pod \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.508938 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l874\" (UniqueName: \"kubernetes.io/projected/c1211ebf-bf2b-422a-8bf6-8ff685d27325-kube-api-access-6l874\") pod \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.509141 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-config\") pod \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\" (UID: \"c1211ebf-bf2b-422a-8bf6-8ff685d27325\") " Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.515588 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1211ebf-bf2b-422a-8bf6-8ff685d27325-kube-api-access-6l874" (OuterVolumeSpecName: "kube-api-access-6l874") pod "c1211ebf-bf2b-422a-8bf6-8ff685d27325" (UID: "c1211ebf-bf2b-422a-8bf6-8ff685d27325"). InnerVolumeSpecName "kube-api-access-6l874". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.529196 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1211ebf-bf2b-422a-8bf6-8ff685d27325" (UID: "c1211ebf-bf2b-422a-8bf6-8ff685d27325"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.547656 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-config" (OuterVolumeSpecName: "config") pod "c1211ebf-bf2b-422a-8bf6-8ff685d27325" (UID: "c1211ebf-bf2b-422a-8bf6-8ff685d27325"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.613359 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.613378 4656 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1211ebf-bf2b-422a-8bf6-8ff685d27325-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.613388 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l874\" (UniqueName: \"kubernetes.io/projected/c1211ebf-bf2b-422a-8bf6-8ff685d27325-kube-api-access-6l874\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.632801 4656 generic.go:334] "Generic (PLEG): container finished" podID="ad05b286-d454-4ab3-b003-cdff0f888c5e" containerID="a7cdf92129f7f9358ac4f1204fb73d1bb90fbf838c4a8a5616d619a3b25b6f8e" exitCode=0 Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.632865 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-9k966" event={"ID":"ad05b286-d454-4ab3-b003-cdff0f888c5e","Type":"ContainerDied","Data":"a7cdf92129f7f9358ac4f1204fb73d1bb90fbf838c4a8a5616d619a3b25b6f8e"} Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.632893 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-9k966" event={"ID":"ad05b286-d454-4ab3-b003-cdff0f888c5e","Type":"ContainerStarted","Data":"3f9bb71ffdcc1857a0ac29f30f2c89bfc60413b2c902a3d6f5b76c55bc1d2bff"} Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.635859 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6a46bc21-63f0-461d-b33d-ec98cb059408","Type":"ContainerStarted","Data":"4d8d41bfb4a46b7e7fa072303556bee49e741b0f0cc9e550c2f36af9fc590dfb"} Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.662557 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vs8p5" event={"ID":"f39f654e-78ca-44c2-8c6a-a1de43a83d3f","Type":"ContainerStarted","Data":"86f2f5f423b174a4cc38f83285eed5ba3620dc56800babdf0cdeedb3b809983d"} Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.673459 4656 generic.go:334] "Generic (PLEG): container finished" podID="cadd7215-631f-469f-9d02-243efd40508a" containerID="0c99690cd99ff76bd306e4eb9caf6e9e98dd2c44cd01b18972ac6c91a1b608d2" exitCode=0 Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.673514 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" event={"ID":"cadd7215-631f-469f-9d02-243efd40508a","Type":"ContainerDied","Data":"0c99690cd99ff76bd306e4eb9caf6e9e98dd2c44cd01b18972ac6c91a1b608d2"} Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.673542 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" event={"ID":"cadd7215-631f-469f-9d02-243efd40508a","Type":"ContainerStarted","Data":"153a0d6ff43f7a95068acd9e82a4dfc8630051312d6191ff39fb5f25db0cc785"} Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.748324 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" event={"ID":"c1211ebf-bf2b-422a-8bf6-8ff685d27325","Type":"ContainerDied","Data":"63a5615d0933628808006a1d7c767190e6e914053a00c94d355b26e0c1287a0b"} Jan 28 
15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.748943 4656 scope.go:117] "RemoveContainer" containerID="d7b43fdb99a2acae6cf8b5d6e5948315f1bd352f9c1ad850d550c3eb79c16b71" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.749212 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-vnkgc" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.867982 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8e41d89b-8943-4aec-9e33-00db569a2ce8","Type":"ContainerStarted","Data":"9e9c19d2591d020a6ca5e65812bb5f91cf0e7e3141a1ca94d0fe83dd22c65397"} Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.891896 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=13.715665373 podStartE2EDuration="55.891864709s" podCreationTimestamp="2026-01-28 15:36:33 +0000 UTC" firstStartedPulling="2026-01-28 15:36:37.431248536 +0000 UTC m=+1087.939419340" lastFinishedPulling="2026-01-28 15:37:19.607447862 +0000 UTC m=+1130.115618676" observedRunningTime="2026-01-28 15:37:28.851260952 +0000 UTC m=+1139.359431756" watchObservedRunningTime="2026-01-28 15:37:28.891864709 +0000 UTC m=+1139.400035513" Jan 28 15:37:28 crc kubenswrapper[4656]: I0128 15:37:28.919970 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vs8p5" podStartSLOduration=3.9199444100000003 podStartE2EDuration="3.91994441s" podCreationTimestamp="2026-01-28 15:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:37:28.897606983 +0000 UTC m=+1139.405777787" watchObservedRunningTime="2026-01-28 15:37:28.91994441 +0000 UTC m=+1139.428115214" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.015437 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-vnkgc"] Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.026755 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-vnkgc"] Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.051298 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=12.312657509 podStartE2EDuration="57.051270683s" podCreationTimestamp="2026-01-28 15:36:32 +0000 UTC" firstStartedPulling="2026-01-28 15:36:34.703783594 +0000 UTC m=+1085.211954398" lastFinishedPulling="2026-01-28 15:37:19.442396768 +0000 UTC m=+1129.950567572" observedRunningTime="2026-01-28 15:37:29.046000193 +0000 UTC m=+1139.554170997" watchObservedRunningTime="2026-01-28 15:37:29.051270683 +0000 UTC m=+1139.559441487" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.109153 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jpj69"] Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.205278 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d610f18-f075-4fdd-9618-c807584a0d12" path="/var/lib/kubelet/pods/4d610f18-f075-4fdd-9618-c807584a0d12/volumes" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.205800 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1211ebf-bf2b-422a-8bf6-8ff685d27325" path="/var/lib/kubelet/pods/c1211ebf-bf2b-422a-8bf6-8ff685d27325/volumes" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.274115 4656 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 28 15:37:29 crc kubenswrapper[4656]: E0128 15:37:29.275884 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1211ebf-bf2b-422a-8bf6-8ff685d27325" containerName="init" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.275981 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1211ebf-bf2b-422a-8bf6-8ff685d27325" containerName="init" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.276541 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1211ebf-bf2b-422a-8bf6-8ff685d27325" containerName="init" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.309097 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.311770 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.315201 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.315427 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-vbjkk" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.315573 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.315725 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.382746 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.427845 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-dns-svc\") pod \"ad05b286-d454-4ab3-b003-cdff0f888c5e\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.427972 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5rh4\" (UniqueName: \"kubernetes.io/projected/ad05b286-d454-4ab3-b003-cdff0f888c5e-kube-api-access-l5rh4\") pod \"ad05b286-d454-4ab3-b003-cdff0f888c5e\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.428866 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-ovsdbserver-nb\") pod \"ad05b286-d454-4ab3-b003-cdff0f888c5e\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.428907 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-config\") pod \"ad05b286-d454-4ab3-b003-cdff0f888c5e\" (UID: \"ad05b286-d454-4ab3-b003-cdff0f888c5e\") " Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.429136 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwvsh\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-kube-api-access-jwvsh\") pod \"swift-storage-0\" (UID: 
\"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.429199 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/19a7b52a-dfe9-47b0-818e-48752d76068e-cache\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.429230 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a7b52a-dfe9-47b0-818e-48752d76068e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.429265 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/19a7b52a-dfe9-47b0-818e-48752d76068e-lock\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.429320 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.429433 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.469558 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad05b286-d454-4ab3-b003-cdff0f888c5e-kube-api-access-l5rh4" (OuterVolumeSpecName: "kube-api-access-l5rh4") pod "ad05b286-d454-4ab3-b003-cdff0f888c5e" (UID: "ad05b286-d454-4ab3-b003-cdff0f888c5e"). InnerVolumeSpecName "kube-api-access-l5rh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.472709 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-config" (OuterVolumeSpecName: "config") pod "ad05b286-d454-4ab3-b003-cdff0f888c5e" (UID: "ad05b286-d454-4ab3-b003-cdff0f888c5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.505833 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ad05b286-d454-4ab3-b003-cdff0f888c5e" (UID: "ad05b286-d454-4ab3-b003-cdff0f888c5e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.524368 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ad05b286-d454-4ab3-b003-cdff0f888c5e" (UID: "ad05b286-d454-4ab3-b003-cdff0f888c5e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531112 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531201 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwvsh\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-kube-api-access-jwvsh\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531236 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/19a7b52a-dfe9-47b0-818e-48752d76068e-cache\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531273 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a7b52a-dfe9-47b0-818e-48752d76068e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531301 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/19a7b52a-dfe9-47b0-818e-48752d76068e-lock\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531340 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531381 4656 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531392 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5rh4\" (UniqueName: \"kubernetes.io/projected/ad05b286-d454-4ab3-b003-cdff0f888c5e-kube-api-access-l5rh4\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531403 4656 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531411 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ad05b286-d454-4ab3-b003-cdff0f888c5e-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.531724 4656 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.532860 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/19a7b52a-dfe9-47b0-818e-48752d76068e-cache\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: E0128 15:37:29.533067 4656 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:37:29 crc kubenswrapper[4656]: E0128 15:37:29.533126 4656 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:37:29 crc kubenswrapper[4656]: E0128 15:37:29.533412 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift podName:19a7b52a-dfe9-47b0-818e-48752d76068e nodeName:}" failed. No retries permitted until 2026-01-28 15:37:30.033190009 +0000 UTC m=+1140.541360913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift") pod "swift-storage-0" (UID: "19a7b52a-dfe9-47b0-818e-48752d76068e") : configmap "swift-ring-files" not found Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.540964 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/19a7b52a-dfe9-47b0-818e-48752d76068e-lock\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: E0128 15:37:29.549990 4656 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 28 15:37:29 crc kubenswrapper[4656]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/cadd7215-631f-469f-9d02-243efd40508a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 28 15:37:29 crc kubenswrapper[4656]: > podSandboxID="153a0d6ff43f7a95068acd9e82a4dfc8630051312d6191ff39fb5f25db0cc785" Jan 28 15:37:29 crc kubenswrapper[4656]: E0128 15:37:29.550250 4656 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 15:37:29 crc kubenswrapper[4656]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5cbh7ch5d4h66fh676hdbh546h95h88h5ffh55ch7fhch57ch687hddhc7h5fdh57dh674h56fh64ch98h9bh557h55dh646h54ch54fh5c4h597q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66n6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86db49b7ff-tmnbd_openstack(cadd7215-631f-469f-9d02-243efd40508a): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/cadd7215-631f-469f-9d02-243efd40508a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 28 15:37:29 crc kubenswrapper[4656]: > logger="UnhandledError" Jan 28 15:37:29 crc kubenswrapper[4656]: E0128 15:37:29.551784 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/cadd7215-631f-469f-9d02-243efd40508a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" podUID="cadd7215-631f-469f-9d02-243efd40508a" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.557076 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/19a7b52a-dfe9-47b0-818e-48752d76068e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.567296 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwvsh\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-kube-api-access-jwvsh\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.575039 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.632755 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.632812 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.699208 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.717051 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.764671 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.823844 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-mfbzm"] Jan 28 15:37:29 crc kubenswrapper[4656]: E0128 15:37:29.824423 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad05b286-d454-4ab3-b003-cdff0f888c5e" containerName="init" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.824449 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad05b286-d454-4ab3-b003-cdff0f888c5e" containerName="init" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.824691 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad05b286-d454-4ab3-b003-cdff0f888c5e" containerName="init" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.825588 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.835751 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.838111 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-swiftconf\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.838153 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5db57b48-1e29-4c73-b488-d6998232fce1-etc-swift\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.838203 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-dispersionconf\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.838262 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-ring-data-devices\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.838283 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hgrh\" (UniqueName: \"kubernetes.io/projected/5db57b48-1e29-4c73-b488-d6998232fce1-kube-api-access-6hgrh\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.838317 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-scripts\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.838351 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-combined-ca-bundle\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.840238 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.840266 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.852079 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mfbzm"] Jan 28 
15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.875996 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-9k966" event={"ID":"ad05b286-d454-4ab3-b003-cdff0f888c5e","Type":"ContainerDied","Data":"3f9bb71ffdcc1857a0ac29f30f2c89bfc60413b2c902a3d6f5b76c55bc1d2bff"} Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.876283 4656 scope.go:117] "RemoveContainer" containerID="a7cdf92129f7f9358ac4f1204fb73d1bb90fbf838c4a8a5616d619a3b25b6f8e" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.876032 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9k966" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.877851 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jpj69" event={"ID":"c152e3b8-7b70-4580-988e-4cf053f87aa2","Type":"ContainerStarted","Data":"3f3fb240badabe1f5e9c9d62c574ea57eeefcfffb2c7d9f13df634359844b112"} Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.877952 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jpj69" event={"ID":"c152e3b8-7b70-4580-988e-4cf053f87aa2","Type":"ContainerStarted","Data":"51515a2b95a452c7a97ce3ad5d48cea215fd18e12f3ce81f0a7c8990597a60e1"} Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.878809 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.939998 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-scripts\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.940091 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-combined-ca-bundle\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.940362 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-swiftconf\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.940598 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5db57b48-1e29-4c73-b488-d6998232fce1-etc-swift\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.940643 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-dispersionconf\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.940790 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-ring-data-devices\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.940830 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hgrh\" (UniqueName: \"kubernetes.io/projected/5db57b48-1e29-4c73-b488-d6998232fce1-kube-api-access-6hgrh\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.951629 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-ring-data-devices\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.953176 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5db57b48-1e29-4c73-b488-d6998232fce1-etc-swift\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.953444 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-scripts\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.959476 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-combined-ca-bundle\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.963369 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-swiftconf\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.966101 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hgrh\" (UniqueName: \"kubernetes.io/projected/5db57b48-1e29-4c73-b488-d6998232fce1-kube-api-access-6hgrh\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:29 crc kubenswrapper[4656]: I0128 15:37:29.989638 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-dispersionconf\") pod \"swift-ring-rebalance-mfbzm\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.003525 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.033016 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-7fd796d7df-9k966"] Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.043352 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:30 crc kubenswrapper[4656]: E0128 15:37:30.046332 4656 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:37:30 crc kubenswrapper[4656]: E0128 15:37:30.046367 4656 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:37:30 crc kubenswrapper[4656]: E0128 15:37:30.046413 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift podName:19a7b52a-dfe9-47b0-818e-48752d76068e nodeName:}" failed. No retries permitted until 2026-01-28 15:37:31.046396258 +0000 UTC m=+1141.554567062 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift") pod "swift-storage-0" (UID: "19a7b52a-dfe9-47b0-818e-48752d76068e") : configmap "swift-ring-files" not found Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.101219 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9k966"] Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.145865 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.148866 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.623418 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.627319 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.631372 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.631515 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.631781 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.631782 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-5bn98" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.688487 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.781104 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mfbzm"] Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.781760 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fd90425-2113-4787-b18d-332f32cedd87-config\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.781829 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s28s\" (UniqueName: \"kubernetes.io/projected/2fd90425-2113-4787-b18d-332f32cedd87-kube-api-access-5s28s\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.781855 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fd90425-2113-4787-b18d-332f32cedd87-scripts\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.781881 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2fd90425-2113-4787-b18d-332f32cedd87-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.781898 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.781959 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.782025 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.883881 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.883960 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.884027 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fd90425-2113-4787-b18d-332f32cedd87-config\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.884079 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s28s\" (UniqueName: \"kubernetes.io/projected/2fd90425-2113-4787-b18d-332f32cedd87-kube-api-access-5s28s\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.884102 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fd90425-2113-4787-b18d-332f32cedd87-scripts\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.884131 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2fd90425-2113-4787-b18d-332f32cedd87-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.884175 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.884741 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2fd90425-2113-4787-b18d-332f32cedd87-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.885231 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fd90425-2113-4787-b18d-332f32cedd87-scripts\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.887958 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2fd90425-2113-4787-b18d-332f32cedd87-config\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.892125 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.893474 4656 generic.go:334] "Generic (PLEG): container finished" podID="c152e3b8-7b70-4580-988e-4cf053f87aa2" containerID="3f3fb240badabe1f5e9c9d62c574ea57eeefcfffb2c7d9f13df634359844b112" exitCode=0 Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.893529 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jpj69" event={"ID":"c152e3b8-7b70-4580-988e-4cf053f87aa2","Type":"ContainerDied","Data":"3f3fb240badabe1f5e9c9d62c574ea57eeefcfffb2c7d9f13df634359844b112"} Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.901986 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" event={"ID":"cadd7215-631f-469f-9d02-243efd40508a","Type":"ContainerStarted","Data":"58b49ff7d74020e7af2d978a9b51add46734f221caa85a5d1256bb5216b6bba6"} Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.903788 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mfbzm" event={"ID":"5db57b48-1e29-4c73-b488-d6998232fce1","Type":"ContainerStarted","Data":"bc88037467be5dfdde8170edc5a85201ae8480da1a1918ca47b82aaa571d9b74"} Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.904966 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.907257 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd90425-2113-4787-b18d-332f32cedd87-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.909871 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s28s\" (UniqueName: \"kubernetes.io/projected/2fd90425-2113-4787-b18d-332f32cedd87-kube-api-access-5s28s\") pod \"ovn-northd-0\" (UID: \"2fd90425-2113-4787-b18d-332f32cedd87\") " pod="openstack/ovn-northd-0" Jan 28 15:37:30 crc kubenswrapper[4656]: I0128 15:37:30.994772 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0"
Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.087656 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0"
Jan 28 15:37:31 crc kubenswrapper[4656]: E0128 15:37:31.088947 4656 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 28 15:37:31 crc kubenswrapper[4656]: E0128 15:37:31.089050 4656 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 28 15:37:31 crc kubenswrapper[4656]: E0128 15:37:31.089154 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift podName:19a7b52a-dfe9-47b0-818e-48752d76068e nodeName:}" failed. No retries permitted until 2026-01-28 15:37:33.08913172 +0000 UTC m=+1143.597302524 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift") pod "swift-storage-0" (UID: "19a7b52a-dfe9-47b0-818e-48752d76068e") : configmap "swift-ring-files" not found
Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.226850 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad05b286-d454-4ab3-b003-cdff0f888c5e" path="/var/lib/kubelet/pods/ad05b286-d454-4ab3-b003-cdff0f888c5e/volumes"
Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.521711 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.913683 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jpj69" event={"ID":"c152e3b8-7b70-4580-988e-4cf053f87aa2","Type":"ContainerStarted","Data":"88b915705c2ba5912b5cc50a01d396aa718cc235e8432ca2223a91e7cf085d37"}
Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.914382 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-jpj69"
Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.920907 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2fd90425-2113-4787-b18d-332f32cedd87","Type":"ContainerStarted","Data":"552ce66e0bf904b0e7f11b0bdb02820d35a856f16b3a8bbbc9a3fe3e79c54749"}
Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.922210 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd"
Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.935736 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-jpj69" podStartSLOduration=3.935710309 podStartE2EDuration="3.935710309s" podCreationTimestamp="2026-01-28 15:37:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:37:31.93222457 +0000 UTC m=+1142.440395404" watchObservedRunningTime="2026-01-28 15:37:31.935710309 +0000 UTC m=+1142.443881113"
Jan 28 15:37:31 crc kubenswrapper[4656]: I0128 15:37:31.983054 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" podStartSLOduration=6.983033148 podStartE2EDuration="6.983033148s" podCreationTimestamp="2026-01-28 15:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:37:31.978236671 +0000 UTC m=+1142.486407495" watchObservedRunningTime="2026-01-28 15:37:31.983033148 +0000 UTC m=+1142.491203952"
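The "SyncLoop (probe)" transitions running through this log (startup "unhealthy" then "started", readiness "" then "ready") are reported by the kubelet's probe workers. For the dnsmasq-dns container, the spec dumped earlier declares plain TCP-connect probes against port 5353; a minimal sketch of that readiness probe, with the values taken from the spec dump and assuming the k8s.io/api and k8s.io/apimachinery modules:

    package main

    import (
            "fmt"

            corev1 "k8s.io/api/core/v1"
            "k8s.io/apimachinery/pkg/util/intstr"
    )

    // The readiness probe from the dnsmasq-dns container spec: a TCP connect
    // to port 5353 every 5s after a 5s initial delay. One success marks the
    // container ready; three consecutive failures mark it unready, which is
    // what flips the readiness status strings seen in these log lines.
    func main() {
            readiness := corev1.Probe{
                    ProbeHandler: corev1.ProbeHandler{
                            TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt32(5353)},
                    },
                    InitialDelaySeconds: 5,
                    TimeoutSeconds:      5,
                    PeriodSeconds:       5,
                    SuccessThreshold:    1,
                    FailureThreshold:    3,
            }
            fmt.Printf("%+v\n", readiness)
    }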
Jan 28 15:37:33 crc kubenswrapper[4656]: I0128 15:37:33.149815 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0"
Jan 28 15:37:33 crc kubenswrapper[4656]: E0128 15:37:33.150094 4656 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 28 15:37:33 crc kubenswrapper[4656]: E0128 15:37:33.150132 4656 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 28 15:37:33 crc kubenswrapper[4656]: E0128 15:37:33.150217 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift podName:19a7b52a-dfe9-47b0-818e-48752d76068e nodeName:}" failed. No retries permitted until 2026-01-28 15:37:37.150197917 +0000 UTC m=+1147.658368721 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift") pod "swift-storage-0" (UID: "19a7b52a-dfe9-47b0-818e-48752d76068e") : configmap "swift-ring-files" not found
Jan 28 15:37:33 crc kubenswrapper[4656]: I0128 15:37:33.553515 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 28 15:37:33 crc kubenswrapper[4656]: I0128 15:37:33.553815 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 28 15:37:35 crc kubenswrapper[4656]: I0128 15:37:35.929275 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 28 15:37:36 crc kubenswrapper[4656]: I0128 15:37:36.026650 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 28 15:37:36 crc kubenswrapper[4656]: I0128 15:37:36.376369 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd"
Jan 28 15:37:36 crc kubenswrapper[4656]: I0128 15:37:36.437539 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 28 15:37:36 crc kubenswrapper[4656]: I0128 15:37:36.437925 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 28 15:37:36 crc kubenswrapper[4656]: I0128 15:37:36.555600 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 28 15:37:36 crc kubenswrapper[4656]: I0128 15:37:36.991795 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mfbzm" event={"ID":"5db57b48-1e29-4c73-b488-d6998232fce1","Type":"ContainerStarted","Data":"b93135778ed8920d21b7e918ab759c599172840deada7079b43816b5823801a7"}
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2fd90425-2113-4787-b18d-332f32cedd87","Type":"ContainerStarted","Data":"5fce9a43853d7e0ead6a3ea8bbb3d36df9b015633215d8aa2ac6da3170d29ab6"} Jan 28 15:37:36 crc kubenswrapper[4656]: I0128 15:37:36.995732 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2fd90425-2113-4787-b18d-332f32cedd87","Type":"ContainerStarted","Data":"083fc772d4ef7d96bd4029f24a651783b0497dacc5e8bb874aff4b49bc744cc2"} Jan 28 15:37:36 crc kubenswrapper[4656]: I0128 15:37:36.995872 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 28 15:37:37 crc kubenswrapper[4656]: I0128 15:37:37.018889 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-mfbzm" podStartSLOduration=2.8168953329999997 podStartE2EDuration="8.018867817s" podCreationTimestamp="2026-01-28 15:37:29 +0000 UTC" firstStartedPulling="2026-01-28 15:37:30.799400591 +0000 UTC m=+1141.307571395" lastFinishedPulling="2026-01-28 15:37:36.001373075 +0000 UTC m=+1146.509543879" observedRunningTime="2026-01-28 15:37:37.010627812 +0000 UTC m=+1147.518798616" watchObservedRunningTime="2026-01-28 15:37:37.018867817 +0000 UTC m=+1147.527038621" Jan 28 15:37:37 crc kubenswrapper[4656]: I0128 15:37:37.118430 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 28 15:37:37 crc kubenswrapper[4656]: I0128 15:37:37.146428 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.675538998 podStartE2EDuration="7.146404412s" podCreationTimestamp="2026-01-28 15:37:30 +0000 UTC" firstStartedPulling="2026-01-28 15:37:31.529396429 +0000 UTC m=+1142.037567243" lastFinishedPulling="2026-01-28 15:37:36.000261833 +0000 UTC m=+1146.508432657" observedRunningTime="2026-01-28 15:37:37.042019457 +0000 UTC m=+1147.550190261" watchObservedRunningTime="2026-01-28 15:37:37.146404412 +0000 UTC m=+1147.654575216" Jan 28 15:37:37 crc kubenswrapper[4656]: I0128 15:37:37.229297 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:37:37 crc kubenswrapper[4656]: E0128 15:37:37.230730 4656 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:37:37 crc kubenswrapper[4656]: E0128 15:37:37.230764 4656 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:37:37 crc kubenswrapper[4656]: E0128 15:37:37.230820 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift podName:19a7b52a-dfe9-47b0-818e-48752d76068e nodeName:}" failed. No retries permitted until 2026-01-28 15:37:45.230795688 +0000 UTC m=+1155.738966562 (durationBeforeRetry 8s). 
Jan 28 15:37:37 crc kubenswrapper[4656]: I0128 15:37:37.229297 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0"
Jan 28 15:37:37 crc kubenswrapper[4656]: E0128 15:37:37.230730 4656 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 28 15:37:37 crc kubenswrapper[4656]: E0128 15:37:37.230764 4656 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 28 15:37:37 crc kubenswrapper[4656]: E0128 15:37:37.230820 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift podName:19a7b52a-dfe9-47b0-818e-48752d76068e nodeName:}" failed. No retries permitted until 2026-01-28 15:37:45.230795688 +0000 UTC m=+1155.738966562 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift") pod "swift-storage-0" (UID: "19a7b52a-dfe9-47b0-818e-48752d76068e") : configmap "swift-ring-files" not found
Jan 28 15:37:37 crc kubenswrapper[4656]: I0128 15:37:37.857268 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 28 15:37:38 crc kubenswrapper[4656]: I0128 15:37:38.467907 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-jpj69"
Jan 28 15:37:38 crc kubenswrapper[4656]: I0128 15:37:38.555261 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tmnbd"]
Jan 28 15:37:38 crc kubenswrapper[4656]: I0128 15:37:38.555517 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" podUID="cadd7215-631f-469f-9d02-243efd40508a" containerName="dnsmasq-dns" containerID="cri-o://58b49ff7d74020e7af2d978a9b51add46734f221caa85a5d1256bb5216b6bba6" gracePeriod=10
Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.025802 4656 generic.go:334] "Generic (PLEG): container finished" podID="cadd7215-631f-469f-9d02-243efd40508a" containerID="58b49ff7d74020e7af2d978a9b51add46734f221caa85a5d1256bb5216b6bba6" exitCode=0
Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.025867 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" event={"ID":"cadd7215-631f-469f-9d02-243efd40508a","Type":"ContainerDied","Data":"58b49ff7d74020e7af2d978a9b51add46734f221caa85a5d1256bb5216b6bba6"}
Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.026244 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" event={"ID":"cadd7215-631f-469f-9d02-243efd40508a","Type":"ContainerDied","Data":"153a0d6ff43f7a95068acd9e82a4dfc8630051312d6191ff39fb5f25db0cc785"}
Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.026282 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="153a0d6ff43f7a95068acd9e82a4dfc8630051312d6191ff39fb5f25db0cc785"
Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.071893 4656 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.166080 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-config\") pod \"cadd7215-631f-469f-9d02-243efd40508a\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.166274 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66n6d\" (UniqueName: \"kubernetes.io/projected/cadd7215-631f-469f-9d02-243efd40508a-kube-api-access-66n6d\") pod \"cadd7215-631f-469f-9d02-243efd40508a\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.166342 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-sb\") pod \"cadd7215-631f-469f-9d02-243efd40508a\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.166429 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-dns-svc\") pod \"cadd7215-631f-469f-9d02-243efd40508a\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.166455 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-nb\") pod \"cadd7215-631f-469f-9d02-243efd40508a\" (UID: \"cadd7215-631f-469f-9d02-243efd40508a\") " Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.181969 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cadd7215-631f-469f-9d02-243efd40508a-kube-api-access-66n6d" (OuterVolumeSpecName: "kube-api-access-66n6d") pod "cadd7215-631f-469f-9d02-243efd40508a" (UID: "cadd7215-631f-469f-9d02-243efd40508a"). InnerVolumeSpecName "kube-api-access-66n6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.216096 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cadd7215-631f-469f-9d02-243efd40508a" (UID: "cadd7215-631f-469f-9d02-243efd40508a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.217345 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cadd7215-631f-469f-9d02-243efd40508a" (UID: "cadd7215-631f-469f-9d02-243efd40508a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.223233 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cadd7215-631f-469f-9d02-243efd40508a" (UID: "cadd7215-631f-469f-9d02-243efd40508a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.233920 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-config" (OuterVolumeSpecName: "config") pod "cadd7215-631f-469f-9d02-243efd40508a" (UID: "cadd7215-631f-469f-9d02-243efd40508a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.269047 4656 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.269085 4656 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.269100 4656 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.269117 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cadd7215-631f-469f-9d02-243efd40508a-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:39 crc kubenswrapper[4656]: I0128 15:37:39.269131 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66n6d\" (UniqueName: \"kubernetes.io/projected/cadd7215-631f-469f-9d02-243efd40508a-kube-api-access-66n6d\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:40 crc kubenswrapper[4656]: I0128 15:37:40.032230 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tmnbd" Jan 28 15:37:40 crc kubenswrapper[4656]: I0128 15:37:40.096344 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tmnbd"] Jan 28 15:37:40 crc kubenswrapper[4656]: I0128 15:37:40.102518 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tmnbd"] Jan 28 15:37:41 crc kubenswrapper[4656]: I0128 15:37:41.180927 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cadd7215-631f-469f-9d02-243efd40508a" path="/var/lib/kubelet/pods/cadd7215-631f-469f-9d02-243efd40508a/volumes" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.354364 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-44tss"] Jan 28 15:37:42 crc kubenswrapper[4656]: E0128 15:37:42.355856 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cadd7215-631f-469f-9d02-243efd40508a" containerName="dnsmasq-dns" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.355979 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="cadd7215-631f-469f-9d02-243efd40508a" containerName="dnsmasq-dns" Jan 28 15:37:42 crc kubenswrapper[4656]: E0128 15:37:42.356129 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cadd7215-631f-469f-9d02-243efd40508a" containerName="init" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.356231 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="cadd7215-631f-469f-9d02-243efd40508a" containerName="init" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.356595 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="cadd7215-631f-469f-9d02-243efd40508a" containerName="dnsmasq-dns" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.357540 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-44tss" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.360086 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.372526 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-44tss"] Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.581969 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbcnt\" (UniqueName: \"kubernetes.io/projected/2e05cea6-abbf-4ab7-b46f-e07960967728-kube-api-access-cbcnt\") pod \"root-account-create-update-44tss\" (UID: \"2e05cea6-abbf-4ab7-b46f-e07960967728\") " pod="openstack/root-account-create-update-44tss" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.582154 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e05cea6-abbf-4ab7-b46f-e07960967728-operator-scripts\") pod \"root-account-create-update-44tss\" (UID: \"2e05cea6-abbf-4ab7-b46f-e07960967728\") " pod="openstack/root-account-create-update-44tss" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.684800 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e05cea6-abbf-4ab7-b46f-e07960967728-operator-scripts\") pod \"root-account-create-update-44tss\" (UID: \"2e05cea6-abbf-4ab7-b46f-e07960967728\") " pod="openstack/root-account-create-update-44tss" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.684903 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbcnt\" (UniqueName: \"kubernetes.io/projected/2e05cea6-abbf-4ab7-b46f-e07960967728-kube-api-access-cbcnt\") pod \"root-account-create-update-44tss\" (UID: \"2e05cea6-abbf-4ab7-b46f-e07960967728\") " pod="openstack/root-account-create-update-44tss" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.686480 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e05cea6-abbf-4ab7-b46f-e07960967728-operator-scripts\") pod \"root-account-create-update-44tss\" (UID: \"2e05cea6-abbf-4ab7-b46f-e07960967728\") " pod="openstack/root-account-create-update-44tss" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.723324 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbcnt\" (UniqueName: \"kubernetes.io/projected/2e05cea6-abbf-4ab7-b46f-e07960967728-kube-api-access-cbcnt\") pod \"root-account-create-update-44tss\" (UID: \"2e05cea6-abbf-4ab7-b46f-e07960967728\") " pod="openstack/root-account-create-update-44tss" Jan 28 15:37:42 crc kubenswrapper[4656]: I0128 15:37:42.973952 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-44tss" Jan 28 15:37:43 crc kubenswrapper[4656]: I0128 15:37:43.582649 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-44tss"] Jan 28 15:37:44 crc kubenswrapper[4656]: I0128 15:37:44.150905 4656 generic.go:334] "Generic (PLEG): container finished" podID="2e05cea6-abbf-4ab7-b46f-e07960967728" containerID="83b94c30ed93547fd506be79bae97518f5ab107ea9f58e554bb480d642b3aaf6" exitCode=0 Jan 28 15:37:44 crc kubenswrapper[4656]: I0128 15:37:44.151241 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-44tss" event={"ID":"2e05cea6-abbf-4ab7-b46f-e07960967728","Type":"ContainerDied","Data":"83b94c30ed93547fd506be79bae97518f5ab107ea9f58e554bb480d642b3aaf6"} Jan 28 15:37:44 crc kubenswrapper[4656]: I0128 15:37:44.151291 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-44tss" event={"ID":"2e05cea6-abbf-4ab7-b46f-e07960967728","Type":"ContainerStarted","Data":"3d34f085a12e9171d1883ba4de0c28f145fef35fb7524b8837c03d40ee2838ed"} Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.007006 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-gnzml"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.008793 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gnzml" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.031372 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gnzml"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.129347 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfcf1378-d339-426c-bd01-36cd47172c37-operator-scripts\") pod \"keystone-db-create-gnzml\" (UID: \"dfcf1378-d339-426c-bd01-36cd47172c37\") " pod="openstack/keystone-db-create-gnzml" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.129695 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7fx4\" (UniqueName: \"kubernetes.io/projected/dfcf1378-d339-426c-bd01-36cd47172c37-kube-api-access-j7fx4\") pod \"keystone-db-create-gnzml\" (UID: \"dfcf1378-d339-426c-bd01-36cd47172c37\") " pod="openstack/keystone-db-create-gnzml" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.180591 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-93fe-account-create-update-r2snq"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.181834 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-93fe-account-create-update-r2snq"
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.190519 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.197713 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-93fe-account-create-update-r2snq"]
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.235354 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfcf1378-d339-426c-bd01-36cd47172c37-operator-scripts\") pod \"keystone-db-create-gnzml\" (UID: \"dfcf1378-d339-426c-bd01-36cd47172c37\") " pod="openstack/keystone-db-create-gnzml"
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.238070 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099dbe89-7289-453c-84a2-f0de86b792cf-operator-scripts\") pod \"keystone-93fe-account-create-update-r2snq\" (UID: \"099dbe89-7289-453c-84a2-f0de86b792cf\") " pod="openstack/keystone-93fe-account-create-update-r2snq"
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.238291 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch2rp\" (UniqueName: \"kubernetes.io/projected/099dbe89-7289-453c-84a2-f0de86b792cf-kube-api-access-ch2rp\") pod \"keystone-93fe-account-create-update-r2snq\" (UID: \"099dbe89-7289-453c-84a2-f0de86b792cf\") " pod="openstack/keystone-93fe-account-create-update-r2snq"
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.237482 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfcf1378-d339-426c-bd01-36cd47172c37-operator-scripts\") pod \"keystone-db-create-gnzml\" (UID: \"dfcf1378-d339-426c-bd01-36cd47172c37\") " pod="openstack/keystone-db-create-gnzml"
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.238508 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7fx4\" (UniqueName: \"kubernetes.io/projected/dfcf1378-d339-426c-bd01-36cd47172c37-kube-api-access-j7fx4\") pod \"keystone-db-create-gnzml\" (UID: \"dfcf1378-d339-426c-bd01-36cd47172c37\") " pod="openstack/keystone-db-create-gnzml"
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.238627 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0"
Jan 28 15:37:45 crc kubenswrapper[4656]: E0128 15:37:45.238965 4656 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 28 15:37:45 crc kubenswrapper[4656]: E0128 15:37:45.239042 4656 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 28 15:37:45 crc kubenswrapper[4656]: E0128 15:37:45.239155 4656 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift podName:19a7b52a-dfe9-47b0-818e-48752d76068e nodeName:}" failed. No retries permitted until 2026-01-28 15:38:01.239133393 +0000 UTC m=+1171.747304197 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift") pod "swift-storage-0" (UID: "19a7b52a-dfe9-47b0-818e-48752d76068e") : configmap "swift-ring-files" not found
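Across this section the etc-swift MountVolume.SetUp retries for swift-storage-0 back off as 500ms, 1s, 2s, 4s, 8s, and now 16s: nestedpendingoperations doubles durationBeforeRetry after each consecutive failure of the same operation, and the mount keeps failing until the swift-ring-files ConfigMap exists (presumably published by the swift-ring-rebalance-mfbzm job seen starting earlier). A sketch of just the doubling visible in this excerpt; the real kubelet implementation also caps the delay, which this log never reaches:

    package main

    import (
            "fmt"
            "time"
    )

    // Doubling backoff matching the durationBeforeRetry sequence logged for
    // the etc-swift MountVolume.SetUp failures. Illustrative only: kubelet's
    // nestedpendingoperations tracks this per operation and applies a maximum
    // delay not observed in this excerpt.
    func main() {
            delay := 500 * time.Millisecond
            for i := 0; i < 6; i++ {
                    fmt.Println(delay) // 500ms 1s 2s 4s 8s 16s
                    delay *= 2
            }
    }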
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.270343 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7fx4\" (UniqueName: \"kubernetes.io/projected/dfcf1378-d339-426c-bd01-36cd47172c37-kube-api-access-j7fx4\") pod \"keystone-db-create-gnzml\" (UID: \"dfcf1378-d339-426c-bd01-36cd47172c37\") " pod="openstack/keystone-db-create-gnzml"
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.328370 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gnzml"
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.340402 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099dbe89-7289-453c-84a2-f0de86b792cf-operator-scripts\") pod \"keystone-93fe-account-create-update-r2snq\" (UID: \"099dbe89-7289-453c-84a2-f0de86b792cf\") " pod="openstack/keystone-93fe-account-create-update-r2snq"
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.340442 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch2rp\" (UniqueName: \"kubernetes.io/projected/099dbe89-7289-453c-84a2-f0de86b792cf-kube-api-access-ch2rp\") pod \"keystone-93fe-account-create-update-r2snq\" (UID: \"099dbe89-7289-453c-84a2-f0de86b792cf\") " pod="openstack/keystone-93fe-account-create-update-r2snq"
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.341592 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099dbe89-7289-453c-84a2-f0de86b792cf-operator-scripts\") pod \"keystone-93fe-account-create-update-r2snq\" (UID: \"099dbe89-7289-453c-84a2-f0de86b792cf\") " pod="openstack/keystone-93fe-account-create-update-r2snq"
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.353323 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-b64sc"]
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.354338 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b64sc"
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.399717 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch2rp\" (UniqueName: \"kubernetes.io/projected/099dbe89-7289-453c-84a2-f0de86b792cf-kube-api-access-ch2rp\") pod \"keystone-93fe-account-create-update-r2snq\" (UID: \"099dbe89-7289-453c-84a2-f0de86b792cf\") " pod="openstack/keystone-93fe-account-create-update-r2snq"
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.421222 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-b64sc"]
Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.504864 4656 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-93fe-account-create-update-r2snq" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.527704 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3d7e-account-create-update-6v56m"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.528943 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.533977 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.558226 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-787d7\" (UniqueName: \"kubernetes.io/projected/e4037dd9-6fe0-4f3a-9fca-4e716126a317-kube-api-access-787d7\") pod \"placement-db-create-b64sc\" (UID: \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\") " pod="openstack/placement-db-create-b64sc" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.558607 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4037dd9-6fe0-4f3a-9fca-4e716126a317-operator-scripts\") pod \"placement-db-create-b64sc\" (UID: \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\") " pod="openstack/placement-db-create-b64sc" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.568252 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3d7e-account-create-update-6v56m"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.661107 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4037dd9-6fe0-4f3a-9fca-4e716126a317-operator-scripts\") pod \"placement-db-create-b64sc\" (UID: \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\") " pod="openstack/placement-db-create-b64sc" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.661248 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a6af64-b874-4449-ae51-8902df8e9bdf-operator-scripts\") pod \"placement-3d7e-account-create-update-6v56m\" (UID: \"c4a6af64-b874-4449-ae51-8902df8e9bdf\") " pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.661283 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk9sj\" (UniqueName: \"kubernetes.io/projected/c4a6af64-b874-4449-ae51-8902df8e9bdf-kube-api-access-tk9sj\") pod \"placement-3d7e-account-create-update-6v56m\" (UID: \"c4a6af64-b874-4449-ae51-8902df8e9bdf\") " pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.661367 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-787d7\" (UniqueName: \"kubernetes.io/projected/e4037dd9-6fe0-4f3a-9fca-4e716126a317-kube-api-access-787d7\") pod \"placement-db-create-b64sc\" (UID: \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\") " pod="openstack/placement-db-create-b64sc" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.670833 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2q82c"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.676078 4656 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4037dd9-6fe0-4f3a-9fca-4e716126a317-operator-scripts\") pod \"placement-db-create-b64sc\" (UID: \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\") " pod="openstack/placement-db-create-b64sc" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.681190 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2q82c" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.683677 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2q82c"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.711502 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-787d7\" (UniqueName: \"kubernetes.io/projected/e4037dd9-6fe0-4f3a-9fca-4e716126a317-kube-api-access-787d7\") pod \"placement-db-create-b64sc\" (UID: \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\") " pod="openstack/placement-db-create-b64sc" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.763082 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk9sj\" (UniqueName: \"kubernetes.io/projected/c4a6af64-b874-4449-ae51-8902df8e9bdf-kube-api-access-tk9sj\") pod \"placement-3d7e-account-create-update-6v56m\" (UID: \"c4a6af64-b874-4449-ae51-8902df8e9bdf\") " pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.763122 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a6af64-b874-4449-ae51-8902df8e9bdf-operator-scripts\") pod \"placement-3d7e-account-create-update-6v56m\" (UID: \"c4a6af64-b874-4449-ae51-8902df8e9bdf\") " pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.763870 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a6af64-b874-4449-ae51-8902df8e9bdf-operator-scripts\") pod \"placement-3d7e-account-create-update-6v56m\" (UID: \"c4a6af64-b874-4449-ae51-8902df8e9bdf\") " pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.765917 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-44tss" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.792527 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-b64sc" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.802303 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk9sj\" (UniqueName: \"kubernetes.io/projected/c4a6af64-b874-4449-ae51-8902df8e9bdf-kube-api-access-tk9sj\") pod \"placement-3d7e-account-create-update-6v56m\" (UID: \"c4a6af64-b874-4449-ae51-8902df8e9bdf\") " pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.853880 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-91ca-account-create-update-lzb9w"] Jan 28 15:37:45 crc kubenswrapper[4656]: E0128 15:37:45.854459 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e05cea6-abbf-4ab7-b46f-e07960967728" containerName="mariadb-account-create-update" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.854507 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e05cea6-abbf-4ab7-b46f-e07960967728" containerName="mariadb-account-create-update" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.854733 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e05cea6-abbf-4ab7-b46f-e07960967728" containerName="mariadb-account-create-update" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.855469 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.856558 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.858727 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.865379 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbcnt\" (UniqueName: \"kubernetes.io/projected/2e05cea6-abbf-4ab7-b46f-e07960967728-kube-api-access-cbcnt\") pod \"2e05cea6-abbf-4ab7-b46f-e07960967728\" (UID: \"2e05cea6-abbf-4ab7-b46f-e07960967728\") " Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.865699 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e05cea6-abbf-4ab7-b46f-e07960967728-operator-scripts\") pod \"2e05cea6-abbf-4ab7-b46f-e07960967728\" (UID: \"2e05cea6-abbf-4ab7-b46f-e07960967728\") " Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.866628 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmf6z\" (UniqueName: \"kubernetes.io/projected/32493d3d-ca02-451a-b1b0-51d4f82d54f3-kube-api-access-jmf6z\") pod \"glance-db-create-2q82c\" (UID: \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\") " pod="openstack/glance-db-create-2q82c" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.866765 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32493d3d-ca02-451a-b1b0-51d4f82d54f3-operator-scripts\") pod \"glance-db-create-2q82c\" (UID: \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\") " pod="openstack/glance-db-create-2q82c" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.867340 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2e05cea6-abbf-4ab7-b46f-e07960967728-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e05cea6-abbf-4ab7-b46f-e07960967728" (UID: "2e05cea6-abbf-4ab7-b46f-e07960967728"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.869990 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-91ca-account-create-update-lzb9w"] Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.891069 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e05cea6-abbf-4ab7-b46f-e07960967728-kube-api-access-cbcnt" (OuterVolumeSpecName: "kube-api-access-cbcnt") pod "2e05cea6-abbf-4ab7-b46f-e07960967728" (UID: "2e05cea6-abbf-4ab7-b46f-e07960967728"). InnerVolumeSpecName "kube-api-access-cbcnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.968568 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32493d3d-ca02-451a-b1b0-51d4f82d54f3-operator-scripts\") pod \"glance-db-create-2q82c\" (UID: \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\") " pod="openstack/glance-db-create-2q82c" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.968706 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e7d30a-cf9c-4aa5-880f-e214b8694082-operator-scripts\") pod \"glance-91ca-account-create-update-lzb9w\" (UID: \"69e7d30a-cf9c-4aa5-880f-e214b8694082\") " pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.968760 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmf6z\" (UniqueName: \"kubernetes.io/projected/32493d3d-ca02-451a-b1b0-51d4f82d54f3-kube-api-access-jmf6z\") pod \"glance-db-create-2q82c\" (UID: \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\") " pod="openstack/glance-db-create-2q82c" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.968792 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkg2c\" (UniqueName: \"kubernetes.io/projected/69e7d30a-cf9c-4aa5-880f-e214b8694082-kube-api-access-dkg2c\") pod \"glance-91ca-account-create-update-lzb9w\" (UID: \"69e7d30a-cf9c-4aa5-880f-e214b8694082\") " pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.968848 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbcnt\" (UniqueName: \"kubernetes.io/projected/2e05cea6-abbf-4ab7-b46f-e07960967728-kube-api-access-cbcnt\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.968859 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e05cea6-abbf-4ab7-b46f-e07960967728-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:45 crc kubenswrapper[4656]: I0128 15:37:45.969625 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32493d3d-ca02-451a-b1b0-51d4f82d54f3-operator-scripts\") pod \"glance-db-create-2q82c\" (UID: \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\") " pod="openstack/glance-db-create-2q82c" Jan 28 15:37:45 crc 
kubenswrapper[4656]: I0128 15:37:45.987770 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gnzml"] Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.019265 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmf6z\" (UniqueName: \"kubernetes.io/projected/32493d3d-ca02-451a-b1b0-51d4f82d54f3-kube-api-access-jmf6z\") pod \"glance-db-create-2q82c\" (UID: \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\") " pod="openstack/glance-db-create-2q82c" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.071047 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkg2c\" (UniqueName: \"kubernetes.io/projected/69e7d30a-cf9c-4aa5-880f-e214b8694082-kube-api-access-dkg2c\") pod \"glance-91ca-account-create-update-lzb9w\" (UID: \"69e7d30a-cf9c-4aa5-880f-e214b8694082\") " pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.071314 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e7d30a-cf9c-4aa5-880f-e214b8694082-operator-scripts\") pod \"glance-91ca-account-create-update-lzb9w\" (UID: \"69e7d30a-cf9c-4aa5-880f-e214b8694082\") " pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.073789 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e7d30a-cf9c-4aa5-880f-e214b8694082-operator-scripts\") pod \"glance-91ca-account-create-update-lzb9w\" (UID: \"69e7d30a-cf9c-4aa5-880f-e214b8694082\") " pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.092718 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkg2c\" (UniqueName: \"kubernetes.io/projected/69e7d30a-cf9c-4aa5-880f-e214b8694082-kube-api-access-dkg2c\") pod \"glance-91ca-account-create-update-lzb9w\" (UID: \"69e7d30a-cf9c-4aa5-880f-e214b8694082\") " pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.168789 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gnzml" event={"ID":"dfcf1378-d339-426c-bd01-36cd47172c37","Type":"ContainerStarted","Data":"1c3bb1d889fd4a796f70f3d1e9d53ba7f824ad2cfc48fac384daf6b22af00e87"} Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.170375 4656 generic.go:334] "Generic (PLEG): container finished" podID="5db57b48-1e29-4c73-b488-d6998232fce1" containerID="b93135778ed8920d21b7e918ab759c599172840deada7079b43816b5823801a7" exitCode=0 Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.170455 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mfbzm" event={"ID":"5db57b48-1e29-4c73-b488-d6998232fce1","Type":"ContainerDied","Data":"b93135778ed8920d21b7e918ab759c599172840deada7079b43816b5823801a7"} Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.179397 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-44tss" event={"ID":"2e05cea6-abbf-4ab7-b46f-e07960967728","Type":"ContainerDied","Data":"3d34f085a12e9171d1883ba4de0c28f145fef35fb7524b8837c03d40ee2838ed"} Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.179462 4656 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3d34f085a12e9171d1883ba4de0c28f145fef35fb7524b8837c03d40ee2838ed" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.179563 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-44tss" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.184665 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.263020 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-93fe-account-create-update-r2snq"] Jan 28 15:37:46 crc kubenswrapper[4656]: W0128 15:37:46.282401 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod099dbe89_7289_453c_84a2_f0de86b792cf.slice/crio-92f699f1bd35f758042c6a6090381504729114735d6bc5a5bc42a505fcc60b89 WatchSource:0}: Error finding container 92f699f1bd35f758042c6a6090381504729114735d6bc5a5bc42a505fcc60b89: Status 404 returned error can't find the container with id 92f699f1bd35f758042c6a6090381504729114735d6bc5a5bc42a505fcc60b89 Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.303028 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2q82c" Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.399194 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-b64sc"] Jan 28 15:37:46 crc kubenswrapper[4656]: W0128 15:37:46.409749 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4037dd9_6fe0_4f3a_9fca_4e716126a317.slice/crio-33494c3e5ffcd32fb5cbae889e5368c3908e63325ddd15a2984bf7e3b05e1486 WatchSource:0}: Error finding container 33494c3e5ffcd32fb5cbae889e5368c3908e63325ddd15a2984bf7e3b05e1486: Status 404 returned error can't find the container with id 33494c3e5ffcd32fb5cbae889e5368c3908e63325ddd15a2984bf7e3b05e1486 Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.525963 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3d7e-account-create-update-6v56m"] Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.766285 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-91ca-account-create-update-lzb9w"] Jan 28 15:37:46 crc kubenswrapper[4656]: W0128 15:37:46.769865 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69e7d30a_cf9c_4aa5_880f_e214b8694082.slice/crio-55ef67f1f36c3603700891ada16e5143b56f1e614e30f050cf287dfa3dc84480 WatchSource:0}: Error finding container 55ef67f1f36c3603700891ada16e5143b56f1e614e30f050cf287dfa3dc84480: Status 404 returned error can't find the container with id 55ef67f1f36c3603700891ada16e5143b56f1e614e30f050cf287dfa3dc84480 Jan 28 15:37:46 crc kubenswrapper[4656]: I0128 15:37:46.948377 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2q82c"] Jan 28 15:37:46 crc kubenswrapper[4656]: W0128 15:37:46.967070 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32493d3d_ca02_451a_b1b0_51d4f82d54f3.slice/crio-0d703ae6d277ea3fdee2b2fe8028c4b0a2846365383a931aaaec18b0d1b6b7a9 WatchSource:0}: Error finding container 0d703ae6d277ea3fdee2b2fe8028c4b0a2846365383a931aaaec18b0d1b6b7a9: Status 404 returned 
error can't find the container with id 0d703ae6d277ea3fdee2b2fe8028c4b0a2846365383a931aaaec18b0d1b6b7a9 Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.194579 4656 generic.go:334] "Generic (PLEG): container finished" podID="69e7d30a-cf9c-4aa5-880f-e214b8694082" containerID="2b87d65405f263ac45d36349abb81b3c7c7ec205bf002789539910deb242d968" exitCode=0 Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.194668 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-91ca-account-create-update-lzb9w" event={"ID":"69e7d30a-cf9c-4aa5-880f-e214b8694082","Type":"ContainerDied","Data":"2b87d65405f263ac45d36349abb81b3c7c7ec205bf002789539910deb242d968"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.194703 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-91ca-account-create-update-lzb9w" event={"ID":"69e7d30a-cf9c-4aa5-880f-e214b8694082","Type":"ContainerStarted","Data":"55ef67f1f36c3603700891ada16e5143b56f1e614e30f050cf287dfa3dc84480"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.197297 4656 generic.go:334] "Generic (PLEG): container finished" podID="e4037dd9-6fe0-4f3a-9fca-4e716126a317" containerID="6fe56555d0ff628c998638556e5ceea7a70562af71008d6c468acec1f6303046" exitCode=0 Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.197352 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b64sc" event={"ID":"e4037dd9-6fe0-4f3a-9fca-4e716126a317","Type":"ContainerDied","Data":"6fe56555d0ff628c998638556e5ceea7a70562af71008d6c468acec1f6303046"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.197373 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b64sc" event={"ID":"e4037dd9-6fe0-4f3a-9fca-4e716126a317","Type":"ContainerStarted","Data":"33494c3e5ffcd32fb5cbae889e5368c3908e63325ddd15a2984bf7e3b05e1486"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.201688 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2q82c" event={"ID":"32493d3d-ca02-451a-b1b0-51d4f82d54f3","Type":"ContainerStarted","Data":"3dde6ae74f513f5fb2d842f4ed02820c2b2f634f4c0cbc22132cce071c91ef58"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.201734 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2q82c" event={"ID":"32493d3d-ca02-451a-b1b0-51d4f82d54f3","Type":"ContainerStarted","Data":"0d703ae6d277ea3fdee2b2fe8028c4b0a2846365383a931aaaec18b0d1b6b7a9"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.209577 4656 generic.go:334] "Generic (PLEG): container finished" podID="dfcf1378-d339-426c-bd01-36cd47172c37" containerID="54efb2e0c946cd8bb2a3f18919e54300b18699ccd32fa90c9ddedf9982d9a734" exitCode=0 Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.209643 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gnzml" event={"ID":"dfcf1378-d339-426c-bd01-36cd47172c37","Type":"ContainerDied","Data":"54efb2e0c946cd8bb2a3f18919e54300b18699ccd32fa90c9ddedf9982d9a734"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.213669 4656 generic.go:334] "Generic (PLEG): container finished" podID="c4a6af64-b874-4449-ae51-8902df8e9bdf" containerID="2365be7a0a671764daf45f822757fffcf6c88cb4fb34815a2047bee431512215" exitCode=0 Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.213755 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3d7e-account-create-update-6v56m" 
event={"ID":"c4a6af64-b874-4449-ae51-8902df8e9bdf","Type":"ContainerDied","Data":"2365be7a0a671764daf45f822757fffcf6c88cb4fb34815a2047bee431512215"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.213805 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3d7e-account-create-update-6v56m" event={"ID":"c4a6af64-b874-4449-ae51-8902df8e9bdf","Type":"ContainerStarted","Data":"6e890274229c2c612d49becf0adfbd07b308538b46af00a2122eafff404bd9ac"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.217734 4656 generic.go:334] "Generic (PLEG): container finished" podID="099dbe89-7289-453c-84a2-f0de86b792cf" containerID="ae2e87be8e9439b8c799f033078d2b5429004fda019d053d0cb63bfa319ab598" exitCode=0 Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.217778 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-93fe-account-create-update-r2snq" event={"ID":"099dbe89-7289-453c-84a2-f0de86b792cf","Type":"ContainerDied","Data":"ae2e87be8e9439b8c799f033078d2b5429004fda019d053d0cb63bfa319ab598"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.217836 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-93fe-account-create-update-r2snq" event={"ID":"099dbe89-7289-453c-84a2-f0de86b792cf","Type":"ContainerStarted","Data":"92f699f1bd35f758042c6a6090381504729114735d6bc5a5bc42a505fcc60b89"} Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.290797 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-2q82c" podStartSLOduration=2.290770631 podStartE2EDuration="2.290770631s" podCreationTimestamp="2026-01-28 15:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:37:47.282249278 +0000 UTC m=+1157.790420082" watchObservedRunningTime="2026-01-28 15:37:47.290770631 +0000 UTC m=+1157.798941445" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.610845 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.719312 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-combined-ca-bundle\") pod \"5db57b48-1e29-4c73-b488-d6998232fce1\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.719370 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hgrh\" (UniqueName: \"kubernetes.io/projected/5db57b48-1e29-4c73-b488-d6998232fce1-kube-api-access-6hgrh\") pod \"5db57b48-1e29-4c73-b488-d6998232fce1\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.719436 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5db57b48-1e29-4c73-b488-d6998232fce1-etc-swift\") pod \"5db57b48-1e29-4c73-b488-d6998232fce1\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.719463 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-swiftconf\") pod \"5db57b48-1e29-4c73-b488-d6998232fce1\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.719491 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-scripts\") pod \"5db57b48-1e29-4c73-b488-d6998232fce1\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.719541 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-ring-data-devices\") pod \"5db57b48-1e29-4c73-b488-d6998232fce1\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.719684 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-dispersionconf\") pod \"5db57b48-1e29-4c73-b488-d6998232fce1\" (UID: \"5db57b48-1e29-4c73-b488-d6998232fce1\") " Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.722902 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5db57b48-1e29-4c73-b488-d6998232fce1" (UID: "5db57b48-1e29-4c73-b488-d6998232fce1"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.723993 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db57b48-1e29-4c73-b488-d6998232fce1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5db57b48-1e29-4c73-b488-d6998232fce1" (UID: "5db57b48-1e29-4c73-b488-d6998232fce1"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.726883 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db57b48-1e29-4c73-b488-d6998232fce1-kube-api-access-6hgrh" (OuterVolumeSpecName: "kube-api-access-6hgrh") pod "5db57b48-1e29-4c73-b488-d6998232fce1" (UID: "5db57b48-1e29-4c73-b488-d6998232fce1"). InnerVolumeSpecName "kube-api-access-6hgrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.729257 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5db57b48-1e29-4c73-b488-d6998232fce1" (UID: "5db57b48-1e29-4c73-b488-d6998232fce1"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.746197 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-scripts" (OuterVolumeSpecName: "scripts") pod "5db57b48-1e29-4c73-b488-d6998232fce1" (UID: "5db57b48-1e29-4c73-b488-d6998232fce1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.748050 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5db57b48-1e29-4c73-b488-d6998232fce1" (UID: "5db57b48-1e29-4c73-b488-d6998232fce1"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.752355 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5db57b48-1e29-4c73-b488-d6998232fce1" (UID: "5db57b48-1e29-4c73-b488-d6998232fce1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.821645 4656 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.821681 4656 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.821691 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hgrh\" (UniqueName: \"kubernetes.io/projected/5db57b48-1e29-4c73-b488-d6998232fce1-kube-api-access-6hgrh\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.821700 4656 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5db57b48-1e29-4c73-b488-d6998232fce1-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.821708 4656 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5db57b48-1e29-4c73-b488-d6998232fce1-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.821719 4656 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:47 crc kubenswrapper[4656]: I0128 15:37:47.821727 4656 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5db57b48-1e29-4c73-b488-d6998232fce1-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.227361 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mfbzm" event={"ID":"5db57b48-1e29-4c73-b488-d6998232fce1","Type":"ContainerDied","Data":"bc88037467be5dfdde8170edc5a85201ae8480da1a1918ca47b82aaa571d9b74"} Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.227403 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc88037467be5dfdde8170edc5a85201ae8480da1a1918ca47b82aaa571d9b74" Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.227483 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-mfbzm" Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.229074 4656 generic.go:334] "Generic (PLEG): container finished" podID="32493d3d-ca02-451a-b1b0-51d4f82d54f3" containerID="3dde6ae74f513f5fb2d842f4ed02820c2b2f634f4c0cbc22132cce071c91ef58" exitCode=0 Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.229546 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2q82c" event={"ID":"32493d3d-ca02-451a-b1b0-51d4f82d54f3","Type":"ContainerDied","Data":"3dde6ae74f513f5fb2d842f4ed02820c2b2f634f4c0cbc22132cce071c91ef58"} Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.574091 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-44tss"] Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.581026 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-44tss"] Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.752813 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.844782 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e7d30a-cf9c-4aa5-880f-e214b8694082-operator-scripts\") pod \"69e7d30a-cf9c-4aa5-880f-e214b8694082\" (UID: \"69e7d30a-cf9c-4aa5-880f-e214b8694082\") " Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.844958 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkg2c\" (UniqueName: \"kubernetes.io/projected/69e7d30a-cf9c-4aa5-880f-e214b8694082-kube-api-access-dkg2c\") pod \"69e7d30a-cf9c-4aa5-880f-e214b8694082\" (UID: \"69e7d30a-cf9c-4aa5-880f-e214b8694082\") " Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.845930 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e7d30a-cf9c-4aa5-880f-e214b8694082-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69e7d30a-cf9c-4aa5-880f-e214b8694082" (UID: "69e7d30a-cf9c-4aa5-880f-e214b8694082"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.857537 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e7d30a-cf9c-4aa5-880f-e214b8694082-kube-api-access-dkg2c" (OuterVolumeSpecName: "kube-api-access-dkg2c") pod "69e7d30a-cf9c-4aa5-880f-e214b8694082" (UID: "69e7d30a-cf9c-4aa5-880f-e214b8694082"). InnerVolumeSpecName "kube-api-access-dkg2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.932436 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b64sc" Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.945214 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-gnzml" Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.948907 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkg2c\" (UniqueName: \"kubernetes.io/projected/69e7d30a-cf9c-4aa5-880f-e214b8694082-kube-api-access-dkg2c\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.948962 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e7d30a-cf9c-4aa5-880f-e214b8694082-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.967856 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:48 crc kubenswrapper[4656]: I0128 15:37:48.978112 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-93fe-account-create-update-r2snq" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.049575 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099dbe89-7289-453c-84a2-f0de86b792cf-operator-scripts\") pod \"099dbe89-7289-453c-84a2-f0de86b792cf\" (UID: \"099dbe89-7289-453c-84a2-f0de86b792cf\") " Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.049654 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4037dd9-6fe0-4f3a-9fca-4e716126a317-operator-scripts\") pod \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\" (UID: \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\") " Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.049687 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-787d7\" (UniqueName: \"kubernetes.io/projected/e4037dd9-6fe0-4f3a-9fca-4e716126a317-kube-api-access-787d7\") pod \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\" (UID: \"e4037dd9-6fe0-4f3a-9fca-4e716126a317\") " Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.049707 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch2rp\" (UniqueName: \"kubernetes.io/projected/099dbe89-7289-453c-84a2-f0de86b792cf-kube-api-access-ch2rp\") pod \"099dbe89-7289-453c-84a2-f0de86b792cf\" (UID: \"099dbe89-7289-453c-84a2-f0de86b792cf\") " Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.049756 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk9sj\" (UniqueName: \"kubernetes.io/projected/c4a6af64-b874-4449-ae51-8902df8e9bdf-kube-api-access-tk9sj\") pod \"c4a6af64-b874-4449-ae51-8902df8e9bdf\" (UID: \"c4a6af64-b874-4449-ae51-8902df8e9bdf\") " Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.049811 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a6af64-b874-4449-ae51-8902df8e9bdf-operator-scripts\") pod \"c4a6af64-b874-4449-ae51-8902df8e9bdf\" (UID: \"c4a6af64-b874-4449-ae51-8902df8e9bdf\") " Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.049853 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfcf1378-d339-426c-bd01-36cd47172c37-operator-scripts\") pod \"dfcf1378-d339-426c-bd01-36cd47172c37\" (UID: 
\"dfcf1378-d339-426c-bd01-36cd47172c37\") " Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.049968 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7fx4\" (UniqueName: \"kubernetes.io/projected/dfcf1378-d339-426c-bd01-36cd47172c37-kube-api-access-j7fx4\") pod \"dfcf1378-d339-426c-bd01-36cd47172c37\" (UID: \"dfcf1378-d339-426c-bd01-36cd47172c37\") " Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.051595 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099dbe89-7289-453c-84a2-f0de86b792cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "099dbe89-7289-453c-84a2-f0de86b792cf" (UID: "099dbe89-7289-453c-84a2-f0de86b792cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.051643 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a6af64-b874-4449-ae51-8902df8e9bdf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4a6af64-b874-4449-ae51-8902df8e9bdf" (UID: "c4a6af64-b874-4449-ae51-8902df8e9bdf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.052026 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfcf1378-d339-426c-bd01-36cd47172c37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dfcf1378-d339-426c-bd01-36cd47172c37" (UID: "dfcf1378-d339-426c-bd01-36cd47172c37"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.052431 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4037dd9-6fe0-4f3a-9fca-4e716126a317-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4037dd9-6fe0-4f3a-9fca-4e716126a317" (UID: "e4037dd9-6fe0-4f3a-9fca-4e716126a317"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.055406 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/099dbe89-7289-453c-84a2-f0de86b792cf-kube-api-access-ch2rp" (OuterVolumeSpecName: "kube-api-access-ch2rp") pod "099dbe89-7289-453c-84a2-f0de86b792cf" (UID: "099dbe89-7289-453c-84a2-f0de86b792cf"). InnerVolumeSpecName "kube-api-access-ch2rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.055784 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4037dd9-6fe0-4f3a-9fca-4e716126a317-kube-api-access-787d7" (OuterVolumeSpecName: "kube-api-access-787d7") pod "e4037dd9-6fe0-4f3a-9fca-4e716126a317" (UID: "e4037dd9-6fe0-4f3a-9fca-4e716126a317"). InnerVolumeSpecName "kube-api-access-787d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.055965 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfcf1378-d339-426c-bd01-36cd47172c37-kube-api-access-j7fx4" (OuterVolumeSpecName: "kube-api-access-j7fx4") pod "dfcf1378-d339-426c-bd01-36cd47172c37" (UID: "dfcf1378-d339-426c-bd01-36cd47172c37"). InnerVolumeSpecName "kube-api-access-j7fx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.058175 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a6af64-b874-4449-ae51-8902df8e9bdf-kube-api-access-tk9sj" (OuterVolumeSpecName: "kube-api-access-tk9sj") pod "c4a6af64-b874-4449-ae51-8902df8e9bdf" (UID: "c4a6af64-b874-4449-ae51-8902df8e9bdf"). InnerVolumeSpecName "kube-api-access-tk9sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.152471 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7fx4\" (UniqueName: \"kubernetes.io/projected/dfcf1378-d339-426c-bd01-36cd47172c37-kube-api-access-j7fx4\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.152519 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099dbe89-7289-453c-84a2-f0de86b792cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.152528 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4037dd9-6fe0-4f3a-9fca-4e716126a317-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.152537 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-787d7\" (UniqueName: \"kubernetes.io/projected/e4037dd9-6fe0-4f3a-9fca-4e716126a317-kube-api-access-787d7\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.152546 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch2rp\" (UniqueName: \"kubernetes.io/projected/099dbe89-7289-453c-84a2-f0de86b792cf-kube-api-access-ch2rp\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.152554 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk9sj\" (UniqueName: \"kubernetes.io/projected/c4a6af64-b874-4449-ae51-8902df8e9bdf-kube-api-access-tk9sj\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.152563 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a6af64-b874-4449-ae51-8902df8e9bdf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.152592 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfcf1378-d339-426c-bd01-36cd47172c37-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.183720 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e05cea6-abbf-4ab7-b46f-e07960967728" path="/var/lib/kubelet/pods/2e05cea6-abbf-4ab7-b46f-e07960967728/volumes" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.238979 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-93fe-account-create-update-r2snq" event={"ID":"099dbe89-7289-453c-84a2-f0de86b792cf","Type":"ContainerDied","Data":"92f699f1bd35f758042c6a6090381504729114735d6bc5a5bc42a505fcc60b89"} Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.239027 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f699f1bd35f758042c6a6090381504729114735d6bc5a5bc42a505fcc60b89" Jan 28 15:37:49 crc 
kubenswrapper[4656]: I0128 15:37:49.239032 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-93fe-account-create-update-r2snq" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.240791 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-91ca-account-create-update-lzb9w" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.241335 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-91ca-account-create-update-lzb9w" event={"ID":"69e7d30a-cf9c-4aa5-880f-e214b8694082","Type":"ContainerDied","Data":"55ef67f1f36c3603700891ada16e5143b56f1e614e30f050cf287dfa3dc84480"} Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.241365 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55ef67f1f36c3603700891ada16e5143b56f1e614e30f050cf287dfa3dc84480" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.259669 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-b64sc" event={"ID":"e4037dd9-6fe0-4f3a-9fca-4e716126a317","Type":"ContainerDied","Data":"33494c3e5ffcd32fb5cbae889e5368c3908e63325ddd15a2984bf7e3b05e1486"} Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.259727 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33494c3e5ffcd32fb5cbae889e5368c3908e63325ddd15a2984bf7e3b05e1486" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.259778 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-b64sc" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.271543 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gnzml" event={"ID":"dfcf1378-d339-426c-bd01-36cd47172c37","Type":"ContainerDied","Data":"1c3bb1d889fd4a796f70f3d1e9d53ba7f824ad2cfc48fac384daf6b22af00e87"} Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.271678 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c3bb1d889fd4a796f70f3d1e9d53ba7f824ad2cfc48fac384daf6b22af00e87" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.271768 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gnzml" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.274946 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3d7e-account-create-update-6v56m" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.275663 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3d7e-account-create-update-6v56m" event={"ID":"c4a6af64-b874-4449-ae51-8902df8e9bdf","Type":"ContainerDied","Data":"6e890274229c2c612d49becf0adfbd07b308538b46af00a2122eafff404bd9ac"} Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.275692 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e890274229c2c612d49becf0adfbd07b308538b46af00a2122eafff404bd9ac" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.700633 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2q82c" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.763754 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmf6z\" (UniqueName: \"kubernetes.io/projected/32493d3d-ca02-451a-b1b0-51d4f82d54f3-kube-api-access-jmf6z\") pod \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\" (UID: \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\") " Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.763875 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32493d3d-ca02-451a-b1b0-51d4f82d54f3-operator-scripts\") pod \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\" (UID: \"32493d3d-ca02-451a-b1b0-51d4f82d54f3\") " Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.764714 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32493d3d-ca02-451a-b1b0-51d4f82d54f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32493d3d-ca02-451a-b1b0-51d4f82d54f3" (UID: "32493d3d-ca02-451a-b1b0-51d4f82d54f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.766953 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32493d3d-ca02-451a-b1b0-51d4f82d54f3-kube-api-access-jmf6z" (OuterVolumeSpecName: "kube-api-access-jmf6z") pod "32493d3d-ca02-451a-b1b0-51d4f82d54f3" (UID: "32493d3d-ca02-451a-b1b0-51d4f82d54f3"). InnerVolumeSpecName "kube-api-access-jmf6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.866015 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmf6z\" (UniqueName: \"kubernetes.io/projected/32493d3d-ca02-451a-b1b0-51d4f82d54f3-kube-api-access-jmf6z\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:49 crc kubenswrapper[4656]: I0128 15:37:49.866316 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32493d3d-ca02-451a-b1b0-51d4f82d54f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:50 crc kubenswrapper[4656]: I0128 15:37:50.282463 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2q82c" event={"ID":"32493d3d-ca02-451a-b1b0-51d4f82d54f3","Type":"ContainerDied","Data":"0d703ae6d277ea3fdee2b2fe8028c4b0a2846365383a931aaaec18b0d1b6b7a9"} Jan 28 15:37:50 crc kubenswrapper[4656]: I0128 15:37:50.282499 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2q82c" Jan 28 15:37:50 crc kubenswrapper[4656]: I0128 15:37:50.282528 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d703ae6d277ea3fdee2b2fe8028c4b0a2846365383a931aaaec18b0d1b6b7a9" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.086849 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-842wg"] Jan 28 15:37:51 crc kubenswrapper[4656]: E0128 15:37:51.087216 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db57b48-1e29-4c73-b488-d6998232fce1" containerName="swift-ring-rebalance" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087230 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db57b48-1e29-4c73-b488-d6998232fce1" containerName="swift-ring-rebalance" Jan 28 15:37:51 crc kubenswrapper[4656]: E0128 15:37:51.087246 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32493d3d-ca02-451a-b1b0-51d4f82d54f3" containerName="mariadb-database-create" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087252 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="32493d3d-ca02-451a-b1b0-51d4f82d54f3" containerName="mariadb-database-create" Jan 28 15:37:51 crc kubenswrapper[4656]: E0128 15:37:51.087267 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a6af64-b874-4449-ae51-8902df8e9bdf" containerName="mariadb-account-create-update" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087273 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a6af64-b874-4449-ae51-8902df8e9bdf" containerName="mariadb-account-create-update" Jan 28 15:37:51 crc kubenswrapper[4656]: E0128 15:37:51.087288 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4037dd9-6fe0-4f3a-9fca-4e716126a317" containerName="mariadb-database-create" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087294 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4037dd9-6fe0-4f3a-9fca-4e716126a317" containerName="mariadb-database-create" Jan 28 15:37:51 crc kubenswrapper[4656]: E0128 15:37:51.087300 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="099dbe89-7289-453c-84a2-f0de86b792cf" containerName="mariadb-account-create-update" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087306 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="099dbe89-7289-453c-84a2-f0de86b792cf" containerName="mariadb-account-create-update" Jan 28 15:37:51 crc kubenswrapper[4656]: E0128 15:37:51.087321 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfcf1378-d339-426c-bd01-36cd47172c37" containerName="mariadb-database-create" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087327 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfcf1378-d339-426c-bd01-36cd47172c37" containerName="mariadb-database-create" Jan 28 15:37:51 crc kubenswrapper[4656]: E0128 15:37:51.087337 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e7d30a-cf9c-4aa5-880f-e214b8694082" containerName="mariadb-account-create-update" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087343 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e7d30a-cf9c-4aa5-880f-e214b8694082" containerName="mariadb-account-create-update" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087495 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4037dd9-6fe0-4f3a-9fca-4e716126a317" 
containerName="mariadb-database-create" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087506 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e7d30a-cf9c-4aa5-880f-e214b8694082" containerName="mariadb-account-create-update" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087514 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="32493d3d-ca02-451a-b1b0-51d4f82d54f3" containerName="mariadb-database-create" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087523 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db57b48-1e29-4c73-b488-d6998232fce1" containerName="swift-ring-rebalance" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087534 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a6af64-b874-4449-ae51-8902df8e9bdf" containerName="mariadb-account-create-update" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087541 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfcf1378-d339-426c-bd01-36cd47172c37" containerName="mariadb-database-create" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.087558 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="099dbe89-7289-453c-84a2-f0de86b792cf" containerName="mariadb-account-create-update" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.088093 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-842wg" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.091042 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.091581 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vkdm4" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.103951 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-842wg"] Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.105392 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.192072 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-combined-ca-bundle\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.192187 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-db-sync-config-data\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.192222 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-config-data\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.192347 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6d9d\" (UniqueName: 
\"kubernetes.io/projected/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-kube-api-access-x6d9d\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.293838 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6d9d\" (UniqueName: \"kubernetes.io/projected/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-kube-api-access-x6d9d\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.293943 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-combined-ca-bundle\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.294005 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-db-sync-config-data\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.294040 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-config-data\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.302931 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-combined-ca-bundle\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.304194 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-db-sync-config-data\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.304376 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-config-data\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.317931 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6d9d\" (UniqueName: \"kubernetes.io/projected/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-kube-api-access-x6d9d\") pod \"glance-db-sync-842wg\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " pod="openstack/glance-db-sync-842wg" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.403158 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-842wg" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.771741 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7pppv" podUID="4815f130-4106-456b-9bcb-b34536d9ddc9" containerName="ovn-controller" probeResult="failure" output=< Jan 28 15:37:51 crc kubenswrapper[4656]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 28 15:37:51 crc kubenswrapper[4656]: > Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.788425 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-28hwk" Jan 28 15:37:51 crc kubenswrapper[4656]: I0128 15:37:51.949897 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-842wg"] Jan 28 15:37:52 crc kubenswrapper[4656]: I0128 15:37:52.297492 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-842wg" event={"ID":"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c","Type":"ContainerStarted","Data":"504211dfa97dc92df86e8b70c17397351feacd00a97b729f5ccfa3e6d0b19223"} Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.309927 4656 generic.go:334] "Generic (PLEG): container finished" podID="07f26e32-4b43-4591-9ed2-6426a96e596e" containerID="74ff180262c50c4a408406c295f8f1bca87a9e6fc375807df80958cce55bb379" exitCode=0 Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.311366 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"07f26e32-4b43-4591-9ed2-6426a96e596e","Type":"ContainerDied","Data":"74ff180262c50c4a408406c295f8f1bca87a9e6fc375807df80958cce55bb379"} Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.320601 4656 generic.go:334] "Generic (PLEG): container finished" podID="2239f1cd-f384-40df-9f71-a46caf290038" containerID="9d149d3029d11945b504fb085462ea962ef4c3fcb25963157c8baca85a61ef3e" exitCode=0 Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.320869 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2239f1cd-f384-40df-9f71-a46caf290038","Type":"ContainerDied","Data":"9d149d3029d11945b504fb085462ea962ef4c3fcb25963157c8baca85a61ef3e"} Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.631102 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-slw4v"] Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.634425 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-slw4v" Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.653908 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.700547 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-slw4v"] Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.750710 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4bf9\" (UniqueName: \"kubernetes.io/projected/1df812f9-220d-45f7-aa2a-c26196ef62e5-kube-api-access-c4bf9\") pod \"root-account-create-update-slw4v\" (UID: \"1df812f9-220d-45f7-aa2a-c26196ef62e5\") " pod="openstack/root-account-create-update-slw4v" Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.750894 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df812f9-220d-45f7-aa2a-c26196ef62e5-operator-scripts\") pod \"root-account-create-update-slw4v\" (UID: \"1df812f9-220d-45f7-aa2a-c26196ef62e5\") " pod="openstack/root-account-create-update-slw4v" Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.852494 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4bf9\" (UniqueName: \"kubernetes.io/projected/1df812f9-220d-45f7-aa2a-c26196ef62e5-kube-api-access-c4bf9\") pod \"root-account-create-update-slw4v\" (UID: \"1df812f9-220d-45f7-aa2a-c26196ef62e5\") " pod="openstack/root-account-create-update-slw4v" Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.852600 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df812f9-220d-45f7-aa2a-c26196ef62e5-operator-scripts\") pod \"root-account-create-update-slw4v\" (UID: \"1df812f9-220d-45f7-aa2a-c26196ef62e5\") " pod="openstack/root-account-create-update-slw4v" Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.853581 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df812f9-220d-45f7-aa2a-c26196ef62e5-operator-scripts\") pod \"root-account-create-update-slw4v\" (UID: \"1df812f9-220d-45f7-aa2a-c26196ef62e5\") " pod="openstack/root-account-create-update-slw4v" Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.876949 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4bf9\" (UniqueName: \"kubernetes.io/projected/1df812f9-220d-45f7-aa2a-c26196ef62e5-kube-api-access-c4bf9\") pod \"root-account-create-update-slw4v\" (UID: \"1df812f9-220d-45f7-aa2a-c26196ef62e5\") " pod="openstack/root-account-create-update-slw4v" Jan 28 15:37:53 crc kubenswrapper[4656]: I0128 15:37:53.989716 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-slw4v" Jan 28 15:37:54 crc kubenswrapper[4656]: I0128 15:37:54.331043 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"07f26e32-4b43-4591-9ed2-6426a96e596e","Type":"ContainerStarted","Data":"eedc09f76fba9795c3755ad8270b6016ad3c1ad57fdd2985408c67e2cb4c8c21"} Jan 28 15:37:54 crc kubenswrapper[4656]: I0128 15:37:54.331643 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:37:54 crc kubenswrapper[4656]: I0128 15:37:54.335559 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2239f1cd-f384-40df-9f71-a46caf290038","Type":"ContainerStarted","Data":"ef19d1fbea8f6cbc01d46ca96ef38ad5556579895d3feb64e4400c248ada14cf"} Jan 28 15:37:54 crc kubenswrapper[4656]: I0128 15:37:54.336202 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 28 15:37:54 crc kubenswrapper[4656]: I0128 15:37:54.377022 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.677132747 podStartE2EDuration="1m24.376995823s" podCreationTimestamp="2026-01-28 15:36:30 +0000 UTC" firstStartedPulling="2026-01-28 15:36:32.85227531 +0000 UTC m=+1083.360446114" lastFinishedPulling="2026-01-28 15:37:19.552138386 +0000 UTC m=+1130.060309190" observedRunningTime="2026-01-28 15:37:54.359254297 +0000 UTC m=+1164.867425121" watchObservedRunningTime="2026-01-28 15:37:54.376995823 +0000 UTC m=+1164.885166627" Jan 28 15:37:54 crc kubenswrapper[4656]: I0128 15:37:54.400109 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.930224881 podStartE2EDuration="1m24.400083871s" podCreationTimestamp="2026-01-28 15:36:30 +0000 UTC" firstStartedPulling="2026-01-28 15:36:33.084599262 +0000 UTC m=+1083.592770066" lastFinishedPulling="2026-01-28 15:37:19.554458252 +0000 UTC m=+1130.062629056" observedRunningTime="2026-01-28 15:37:54.389316724 +0000 UTC m=+1164.897487558" watchObservedRunningTime="2026-01-28 15:37:54.400083871 +0000 UTC m=+1164.908254675" Jan 28 15:37:54 crc kubenswrapper[4656]: I0128 15:37:54.493997 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-slw4v"] Jan 28 15:37:54 crc kubenswrapper[4656]: W0128 15:37:54.506674 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1df812f9_220d_45f7_aa2a_c26196ef62e5.slice/crio-6edbb4faf0e06a43e8ee09539fd5a45d26d278d4998a444175e4443ac00be48d WatchSource:0}: Error finding container 6edbb4faf0e06a43e8ee09539fd5a45d26d278d4998a444175e4443ac00be48d: Status 404 returned error can't find the container with id 6edbb4faf0e06a43e8ee09539fd5a45d26d278d4998a444175e4443ac00be48d Jan 28 15:37:55 crc kubenswrapper[4656]: I0128 15:37:55.345209 4656 generic.go:334] "Generic (PLEG): container finished" podID="1df812f9-220d-45f7-aa2a-c26196ef62e5" containerID="5f4d77dc94bab7589c74c06da6f776ffa81bcf97ec1f8a2199e44a457c390fb6" exitCode=0 Jan 28 15:37:55 crc kubenswrapper[4656]: I0128 15:37:55.345273 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-slw4v" event={"ID":"1df812f9-220d-45f7-aa2a-c26196ef62e5","Type":"ContainerDied","Data":"5f4d77dc94bab7589c74c06da6f776ffa81bcf97ec1f8a2199e44a457c390fb6"} Jan 
28 15:37:55 crc kubenswrapper[4656]: I0128 15:37:55.345703 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-slw4v" event={"ID":"1df812f9-220d-45f7-aa2a-c26196ef62e5","Type":"ContainerStarted","Data":"6edbb4faf0e06a43e8ee09539fd5a45d26d278d4998a444175e4443ac00be48d"} Jan 28 15:37:56 crc kubenswrapper[4656]: I0128 15:37:56.875731 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-slw4v" Jan 28 15:37:56 crc kubenswrapper[4656]: I0128 15:37:56.885335 4656 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7pppv" podUID="4815f130-4106-456b-9bcb-b34536d9ddc9" containerName="ovn-controller" probeResult="failure" output=< Jan 28 15:37:56 crc kubenswrapper[4656]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 28 15:37:56 crc kubenswrapper[4656]: > Jan 28 15:37:56 crc kubenswrapper[4656]: I0128 15:37:56.889088 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-28hwk" Jan 28 15:37:56 crc kubenswrapper[4656]: I0128 15:37:56.914821 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4bf9\" (UniqueName: \"kubernetes.io/projected/1df812f9-220d-45f7-aa2a-c26196ef62e5-kube-api-access-c4bf9\") pod \"1df812f9-220d-45f7-aa2a-c26196ef62e5\" (UID: \"1df812f9-220d-45f7-aa2a-c26196ef62e5\") " Jan 28 15:37:56 crc kubenswrapper[4656]: I0128 15:37:56.914898 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df812f9-220d-45f7-aa2a-c26196ef62e5-operator-scripts\") pod \"1df812f9-220d-45f7-aa2a-c26196ef62e5\" (UID: \"1df812f9-220d-45f7-aa2a-c26196ef62e5\") " Jan 28 15:37:56 crc kubenswrapper[4656]: I0128 15:37:56.918519 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1df812f9-220d-45f7-aa2a-c26196ef62e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1df812f9-220d-45f7-aa2a-c26196ef62e5" (UID: "1df812f9-220d-45f7-aa2a-c26196ef62e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:37:56 crc kubenswrapper[4656]: I0128 15:37:56.923594 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df812f9-220d-45f7-aa2a-c26196ef62e5-kube-api-access-c4bf9" (OuterVolumeSpecName: "kube-api-access-c4bf9") pod "1df812f9-220d-45f7-aa2a-c26196ef62e5" (UID: "1df812f9-220d-45f7-aa2a-c26196ef62e5"). InnerVolumeSpecName "kube-api-access-c4bf9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.017299 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4bf9\" (UniqueName: \"kubernetes.io/projected/1df812f9-220d-45f7-aa2a-c26196ef62e5-kube-api-access-c4bf9\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.017364 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df812f9-220d-45f7-aa2a-c26196ef62e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.135477 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7pppv-config-tdl92"] Jan 28 15:37:57 crc kubenswrapper[4656]: E0128 15:37:57.135946 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df812f9-220d-45f7-aa2a-c26196ef62e5" containerName="mariadb-account-create-update" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.135968 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df812f9-220d-45f7-aa2a-c26196ef62e5" containerName="mariadb-account-create-update" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.136195 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="1df812f9-220d-45f7-aa2a-c26196ef62e5" containerName="mariadb-account-create-update" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.136868 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.148770 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.165626 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7pppv-config-tdl92"] Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.233502 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run-ovn\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.234613 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ttw7\" (UniqueName: \"kubernetes.io/projected/e622e2da-0115-43f6-bfe9-e3611becf555-kube-api-access-8ttw7\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.234886 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.234912 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-log-ovn\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: 
\"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.235118 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-scripts\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.235346 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-additional-scripts\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.336661 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-scripts\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.336736 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-additional-scripts\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.336768 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run-ovn\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.336814 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ttw7\" (UniqueName: \"kubernetes.io/projected/e622e2da-0115-43f6-bfe9-e3611becf555-kube-api-access-8ttw7\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.336846 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.336867 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-log-ovn\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.337382 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-log-ovn\") pod 
\"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.339676 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-scripts\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.340126 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-additional-scripts\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.340376 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.340583 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run-ovn\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.367094 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-slw4v" event={"ID":"1df812f9-220d-45f7-aa2a-c26196ef62e5","Type":"ContainerDied","Data":"6edbb4faf0e06a43e8ee09539fd5a45d26d278d4998a444175e4443ac00be48d"} Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.367136 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6edbb4faf0e06a43e8ee09539fd5a45d26d278d4998a444175e4443ac00be48d" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.367231 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-slw4v" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.377388 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ttw7\" (UniqueName: \"kubernetes.io/projected/e622e2da-0115-43f6-bfe9-e3611becf555-kube-api-access-8ttw7\") pod \"ovn-controller-7pppv-config-tdl92\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.454772 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:37:57 crc kubenswrapper[4656]: I0128 15:37:57.841730 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7pppv-config-tdl92"] Jan 28 15:37:58 crc kubenswrapper[4656]: I0128 15:37:58.375346 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7pppv-config-tdl92" event={"ID":"e622e2da-0115-43f6-bfe9-e3611becf555","Type":"ContainerStarted","Data":"4aa31b8f5619732fb1055bbe4acfbf9c0754a2dcbf2dcdb1cd6e163c262a3dc3"} Jan 28 15:37:59 crc kubenswrapper[4656]: I0128 15:37:59.383712 4656 generic.go:334] "Generic (PLEG): container finished" podID="e622e2da-0115-43f6-bfe9-e3611becf555" containerID="bdbd8b9752b666c133af6189feff3118817c7063c0dd053bb54b7a6f4c3b19d7" exitCode=0 Jan 28 15:37:59 crc kubenswrapper[4656]: I0128 15:37:59.383815 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7pppv-config-tdl92" event={"ID":"e622e2da-0115-43f6-bfe9-e3611becf555","Type":"ContainerDied","Data":"bdbd8b9752b666c133af6189feff3118817c7063c0dd053bb54b7a6f4c3b19d7"} Jan 28 15:38:01 crc kubenswrapper[4656]: I0128 15:38:01.315333 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:38:01 crc kubenswrapper[4656]: I0128 15:38:01.329508 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19a7b52a-dfe9-47b0-818e-48752d76068e-etc-swift\") pod \"swift-storage-0\" (UID: \"19a7b52a-dfe9-47b0-818e-48752d76068e\") " pod="openstack/swift-storage-0" Jan 28 15:38:01 crc kubenswrapper[4656]: I0128 15:38:01.477081 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 28 15:38:01 crc kubenswrapper[4656]: I0128 15:38:01.777562 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-7pppv" Jan 28 15:38:08 crc kubenswrapper[4656]: E0128 15:38:08.653268 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 28 15:38:08 crc kubenswrapper[4656]: E0128 15:38:08.654056 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6d9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-842wg_openstack(72cfa9c1-01ab-4c7e-80fa-f99e63b2602c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:38:08 crc kubenswrapper[4656]: E0128 15:38:08.655499 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-842wg" podUID="72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.721963 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.920731 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run\") pod \"e622e2da-0115-43f6-bfe9-e3611becf555\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.921343 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-log-ovn\") pod \"e622e2da-0115-43f6-bfe9-e3611becf555\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.921379 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run" (OuterVolumeSpecName: "var-run") pod "e622e2da-0115-43f6-bfe9-e3611becf555" (UID: "e622e2da-0115-43f6-bfe9-e3611becf555"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.921398 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-additional-scripts\") pod \"e622e2da-0115-43f6-bfe9-e3611becf555\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.921482 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ttw7\" (UniqueName: \"kubernetes.io/projected/e622e2da-0115-43f6-bfe9-e3611becf555-kube-api-access-8ttw7\") pod \"e622e2da-0115-43f6-bfe9-e3611becf555\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.921550 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-scripts\") pod \"e622e2da-0115-43f6-bfe9-e3611becf555\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.921582 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run-ovn\") pod \"e622e2da-0115-43f6-bfe9-e3611becf555\" (UID: \"e622e2da-0115-43f6-bfe9-e3611becf555\") " Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.921879 4656 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.921925 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e622e2da-0115-43f6-bfe9-e3611becf555" (UID: "e622e2da-0115-43f6-bfe9-e3611becf555"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.922100 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e622e2da-0115-43f6-bfe9-e3611becf555" (UID: "e622e2da-0115-43f6-bfe9-e3611becf555"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.922198 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e622e2da-0115-43f6-bfe9-e3611becf555" (UID: "e622e2da-0115-43f6-bfe9-e3611becf555"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.923209 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-scripts" (OuterVolumeSpecName: "scripts") pod "e622e2da-0115-43f6-bfe9-e3611becf555" (UID: "e622e2da-0115-43f6-bfe9-e3611becf555"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:08 crc kubenswrapper[4656]: I0128 15:38:08.929667 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e622e2da-0115-43f6-bfe9-e3611becf555-kube-api-access-8ttw7" (OuterVolumeSpecName: "kube-api-access-8ttw7") pod "e622e2da-0115-43f6-bfe9-e3611becf555" (UID: "e622e2da-0115-43f6-bfe9-e3611becf555"). InnerVolumeSpecName "kube-api-access-8ttw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.024367 4656 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.024408 4656 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.024421 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ttw7\" (UniqueName: \"kubernetes.io/projected/e622e2da-0115-43f6-bfe9-e3611becf555-kube-api-access-8ttw7\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.024435 4656 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e622e2da-0115-43f6-bfe9-e3611becf555-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.024446 4656 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e622e2da-0115-43f6-bfe9-e3611becf555-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.420384 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.509904 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7pppv-config-tdl92" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.510438 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7pppv-config-tdl92" event={"ID":"e622e2da-0115-43f6-bfe9-e3611becf555","Type":"ContainerDied","Data":"4aa31b8f5619732fb1055bbe4acfbf9c0754a2dcbf2dcdb1cd6e163c262a3dc3"} Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.510485 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aa31b8f5619732fb1055bbe4acfbf9c0754a2dcbf2dcdb1cd6e163c262a3dc3" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.512808 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"64268af379d7f3c0e056d35cf34f8787a2f4ad9991806eb3fc8abaa1994d99ba"} Jan 28 15:38:09 crc kubenswrapper[4656]: E0128 15:38:09.514954 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-842wg" podUID="72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.871391 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7pppv-config-tdl92"] Jan 28 15:38:09 crc kubenswrapper[4656]: I0128 15:38:09.880215 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7pppv-config-tdl92"] Jan 28 15:38:11 crc kubenswrapper[4656]: I0128 15:38:11.188496 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e622e2da-0115-43f6-bfe9-e3611becf555" path="/var/lib/kubelet/pods/e622e2da-0115-43f6-bfe9-e3611becf555/volumes" Jan 28 15:38:11 crc kubenswrapper[4656]: I0128 15:38:11.263732 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:38:11 crc kubenswrapper[4656]: I0128 15:38:11.263811 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:38:11 crc kubenswrapper[4656]: I0128 15:38:11.533757 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"192573c372aa8363c99db92dcea056662b0a52a0e76c9fcbb7561765dbe3a3ec"} Jan 28 15:38:11 crc kubenswrapper[4656]: I0128 15:38:11.533814 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"6fdb704054c689a55fbae3248476332c7d200fc74fe7d027abe987373aa0b7ee"} Jan 28 15:38:11 crc kubenswrapper[4656]: I0128 15:38:11.533837 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"0dc563523a3b05da09673142d0bcef433d0ed61d24e722cc9002f95d87bf2ca2"} Jan 28 
15:38:11 crc kubenswrapper[4656]: I0128 15:38:11.533846 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"84aa59c7985abd9f8596a471a0eedba27ee3d39d366f840ae4b611576b614d08"} Jan 28 15:38:11 crc kubenswrapper[4656]: I0128 15:38:11.958351 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:38:12 crc kubenswrapper[4656]: I0128 15:38:12.398376 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 15:38:13 crc kubenswrapper[4656]: I0128 15:38:13.565510 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"7df3ab25926ee1fd18f4c68b8f14da2c9abaced628c0abf4130c4058e472fb19"} Jan 28 15:38:13 crc kubenswrapper[4656]: I0128 15:38:13.566767 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"f802c419dee768a52cb8dc9d16fa88ca018ac59b8bc0a8bee1cc28d556402451"} Jan 28 15:38:13 crc kubenswrapper[4656]: I0128 15:38:13.566847 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"9fe2e05ce17cdad3e4d859f62c568981c7fa711d2bdbf7ff63766b1160c5c8fd"} Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.558731 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4dvx6"] Jan 28 15:38:14 crc kubenswrapper[4656]: E0128 15:38:14.559237 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e622e2da-0115-43f6-bfe9-e3611becf555" containerName="ovn-config" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.559259 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e622e2da-0115-43f6-bfe9-e3611becf555" containerName="ovn-config" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.559504 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="e622e2da-0115-43f6-bfe9-e3611becf555" containerName="ovn-config" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.560136 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.617795 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4dvx6"] Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.630073 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"5a0e6ef9b93913865f6a47f446281f75ce1121e19515c1259df35c0242397cb6"} Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.672775 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-vdvd7"] Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.674053 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.698105 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vdvd7"] Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.724839 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkr88\" (UniqueName: \"kubernetes.io/projected/2455ddf8-bd67-4fe1-821e-0feda40d7da9-kube-api-access-bkr88\") pod \"barbican-db-create-vdvd7\" (UID: \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\") " pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.724967 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57z68\" (UniqueName: \"kubernetes.io/projected/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-kube-api-access-57z68\") pod \"cinder-db-create-4dvx6\" (UID: \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\") " pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.725041 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2455ddf8-bd67-4fe1-821e-0feda40d7da9-operator-scripts\") pod \"barbican-db-create-vdvd7\" (UID: \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\") " pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.725088 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-operator-scripts\") pod \"cinder-db-create-4dvx6\" (UID: \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\") " pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.783349 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2c7a-account-create-update-s5pbw"] Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.785098 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.788865 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.807800 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2c7a-account-create-update-s5pbw"] Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.826133 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57z68\" (UniqueName: \"kubernetes.io/projected/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-kube-api-access-57z68\") pod \"cinder-db-create-4dvx6\" (UID: \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\") " pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.826226 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz4kn\" (UniqueName: \"kubernetes.io/projected/e4d3108c-ea49-46b7-896a-6303b5651abc-kube-api-access-dz4kn\") pod \"cinder-2c7a-account-create-update-s5pbw\" (UID: \"e4d3108c-ea49-46b7-896a-6303b5651abc\") " pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.826293 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2455ddf8-bd67-4fe1-821e-0feda40d7da9-operator-scripts\") pod \"barbican-db-create-vdvd7\" (UID: \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\") " pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.826333 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-operator-scripts\") pod \"cinder-db-create-4dvx6\" (UID: \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\") " pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.826377 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkr88\" (UniqueName: \"kubernetes.io/projected/2455ddf8-bd67-4fe1-821e-0feda40d7da9-kube-api-access-bkr88\") pod \"barbican-db-create-vdvd7\" (UID: \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\") " pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.826460 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d3108c-ea49-46b7-896a-6303b5651abc-operator-scripts\") pod \"cinder-2c7a-account-create-update-s5pbw\" (UID: \"e4d3108c-ea49-46b7-896a-6303b5651abc\") " pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.827480 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2455ddf8-bd67-4fe1-821e-0feda40d7da9-operator-scripts\") pod \"barbican-db-create-vdvd7\" (UID: \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\") " pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.827479 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-operator-scripts\") pod \"cinder-db-create-4dvx6\" (UID: 
\"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\") " pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.867813 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57z68\" (UniqueName: \"kubernetes.io/projected/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-kube-api-access-57z68\") pod \"cinder-db-create-4dvx6\" (UID: \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\") " pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.884511 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkr88\" (UniqueName: \"kubernetes.io/projected/2455ddf8-bd67-4fe1-821e-0feda40d7da9-kube-api-access-bkr88\") pod \"barbican-db-create-vdvd7\" (UID: \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\") " pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.885225 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.911023 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-cc7jv"] Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.912311 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.928093 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxcbv\" (UniqueName: \"kubernetes.io/projected/ccf851bf-a272-4c0a-99a1-97c464d23a0d-kube-api-access-wxcbv\") pod \"neutron-db-create-cc7jv\" (UID: \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\") " pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.928158 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d3108c-ea49-46b7-896a-6303b5651abc-operator-scripts\") pod \"cinder-2c7a-account-create-update-s5pbw\" (UID: \"e4d3108c-ea49-46b7-896a-6303b5651abc\") " pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.928213 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccf851bf-a272-4c0a-99a1-97c464d23a0d-operator-scripts\") pod \"neutron-db-create-cc7jv\" (UID: \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\") " pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.928255 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz4kn\" (UniqueName: \"kubernetes.io/projected/e4d3108c-ea49-46b7-896a-6303b5651abc-kube-api-access-dz4kn\") pod \"cinder-2c7a-account-create-update-s5pbw\" (UID: \"e4d3108c-ea49-46b7-896a-6303b5651abc\") " pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.929460 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d3108c-ea49-46b7-896a-6303b5651abc-operator-scripts\") pod \"cinder-2c7a-account-create-update-s5pbw\" (UID: \"e4d3108c-ea49-46b7-896a-6303b5651abc\") " pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.953064 4656 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-a9b7-account-create-update-sf97j"] Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.955362 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.963490 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-cc7jv"] Jan 28 15:38:14 crc kubenswrapper[4656]: I0128 15:38:14.965624 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.007957 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.027289 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a9b7-account-create-update-sf97j"] Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.030551 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc7809e-fb0b-4e26-ad3b-45aceb483265-operator-scripts\") pod \"barbican-a9b7-account-create-update-sf97j\" (UID: \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\") " pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.030619 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxcbv\" (UniqueName: \"kubernetes.io/projected/ccf851bf-a272-4c0a-99a1-97c464d23a0d-kube-api-access-wxcbv\") pod \"neutron-db-create-cc7jv\" (UID: \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\") " pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.030643 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dps9p\" (UniqueName: \"kubernetes.io/projected/1dc7809e-fb0b-4e26-ad3b-45aceb483265-kube-api-access-dps9p\") pod \"barbican-a9b7-account-create-update-sf97j\" (UID: \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\") " pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.030666 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccf851bf-a272-4c0a-99a1-97c464d23a0d-operator-scripts\") pod \"neutron-db-create-cc7jv\" (UID: \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\") " pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.031432 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccf851bf-a272-4c0a-99a1-97c464d23a0d-operator-scripts\") pod \"neutron-db-create-cc7jv\" (UID: \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\") " pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.057435 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz4kn\" (UniqueName: \"kubernetes.io/projected/e4d3108c-ea49-46b7-896a-6303b5651abc-kube-api-access-dz4kn\") pod \"cinder-2c7a-account-create-update-s5pbw\" (UID: \"e4d3108c-ea49-46b7-896a-6303b5651abc\") " pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.068271 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxcbv\" 
(UniqueName: \"kubernetes.io/projected/ccf851bf-a272-4c0a-99a1-97c464d23a0d-kube-api-access-wxcbv\") pod \"neutron-db-create-cc7jv\" (UID: \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\") " pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.117115 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.132267 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc7809e-fb0b-4e26-ad3b-45aceb483265-operator-scripts\") pod \"barbican-a9b7-account-create-update-sf97j\" (UID: \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\") " pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.133455 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc7809e-fb0b-4e26-ad3b-45aceb483265-operator-scripts\") pod \"barbican-a9b7-account-create-update-sf97j\" (UID: \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\") " pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.133964 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dps9p\" (UniqueName: \"kubernetes.io/projected/1dc7809e-fb0b-4e26-ad3b-45aceb483265-kube-api-access-dps9p\") pod \"barbican-a9b7-account-create-update-sf97j\" (UID: \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\") " pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.173435 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dps9p\" (UniqueName: \"kubernetes.io/projected/1dc7809e-fb0b-4e26-ad3b-45aceb483265-kube-api-access-dps9p\") pod \"barbican-a9b7-account-create-update-sf97j\" (UID: \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\") " pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.203850 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3131-account-create-update-6kp5h"] Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.205415 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.219131 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.236955 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3131-account-create-update-6kp5h"] Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.338085 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4jbc\" (UniqueName: \"kubernetes.io/projected/cf48bf2a-2b8d-41ac-a712-9218091e8352-kube-api-access-q4jbc\") pod \"neutron-3131-account-create-update-6kp5h\" (UID: \"cf48bf2a-2b8d-41ac-a712-9218091e8352\") " pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.338202 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf48bf2a-2b8d-41ac-a712-9218091e8352-operator-scripts\") pod \"neutron-3131-account-create-update-6kp5h\" (UID: \"cf48bf2a-2b8d-41ac-a712-9218091e8352\") " pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.357695 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.413514 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.449382 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4jbc\" (UniqueName: \"kubernetes.io/projected/cf48bf2a-2b8d-41ac-a712-9218091e8352-kube-api-access-q4jbc\") pod \"neutron-3131-account-create-update-6kp5h\" (UID: \"cf48bf2a-2b8d-41ac-a712-9218091e8352\") " pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.449617 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf48bf2a-2b8d-41ac-a712-9218091e8352-operator-scripts\") pod \"neutron-3131-account-create-update-6kp5h\" (UID: \"cf48bf2a-2b8d-41ac-a712-9218091e8352\") " pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.460066 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf48bf2a-2b8d-41ac-a712-9218091e8352-operator-scripts\") pod \"neutron-3131-account-create-update-6kp5h\" (UID: \"cf48bf2a-2b8d-41ac-a712-9218091e8352\") " pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.486864 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4jbc\" (UniqueName: \"kubernetes.io/projected/cf48bf2a-2b8d-41ac-a712-9218091e8352-kube-api-access-q4jbc\") pod \"neutron-3131-account-create-update-6kp5h\" (UID: \"cf48bf2a-2b8d-41ac-a712-9218091e8352\") " pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.554059 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4dvx6"] Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 
15:38:15.569217 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.645192 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4dvx6" event={"ID":"9aed3d60-8ff4-4b82-9bf8-7892dff01cff","Type":"ContainerStarted","Data":"d76b77f287ea9dec0ef0421a43060b12341366ebba74f0e217fcb0bea0847d57"} Jan 28 15:38:15 crc kubenswrapper[4656]: I0128 15:38:15.894888 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vdvd7"] Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.021569 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2c7a-account-create-update-s5pbw"] Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.084938 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-cc7jv"] Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.282957 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a9b7-account-create-update-sf97j"] Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.367997 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3131-account-create-update-6kp5h"] Jan 28 15:38:16 crc kubenswrapper[4656]: W0128 15:38:16.465329 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccf851bf_a272_4c0a_99a1_97c464d23a0d.slice/crio-e1baea593ef9082101f89950bf2975a161938bc6fc70b0ea34199032d50887dc WatchSource:0}: Error finding container e1baea593ef9082101f89950bf2975a161938bc6fc70b0ea34199032d50887dc: Status 404 returned error can't find the container with id e1baea593ef9082101f89950bf2975a161938bc6fc70b0ea34199032d50887dc Jan 28 15:38:16 crc kubenswrapper[4656]: W0128 15:38:16.513029 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2455ddf8_bd67_4fe1_821e_0feda40d7da9.slice/crio-448f5e3766be5e25db50996e41eb26b4bf0333c1878fda9c77f99f41f24674d8 WatchSource:0}: Error finding container 448f5e3766be5e25db50996e41eb26b4bf0333c1878fda9c77f99f41f24674d8: Status 404 returned error can't find the container with id 448f5e3766be5e25db50996e41eb26b4bf0333c1878fda9c77f99f41f24674d8 Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.683520 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a9b7-account-create-update-sf97j" event={"ID":"1dc7809e-fb0b-4e26-ad3b-45aceb483265","Type":"ContainerStarted","Data":"49a1c134769a44dcbd496872d4d2a6b24f18b38e3711f4fc9046bd67e56974d5"} Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.686360 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3131-account-create-update-6kp5h" event={"ID":"cf48bf2a-2b8d-41ac-a712-9218091e8352","Type":"ContainerStarted","Data":"af8fdcd790caf1f39c9294b7636081ecd279be7604886dcdfc416f7645a5171e"} Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.687506 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vdvd7" event={"ID":"2455ddf8-bd67-4fe1-821e-0feda40d7da9","Type":"ContainerStarted","Data":"448f5e3766be5e25db50996e41eb26b4bf0333c1878fda9c77f99f41f24674d8"} Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.692853 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cc7jv" 
event={"ID":"ccf851bf-a272-4c0a-99a1-97c464d23a0d","Type":"ContainerStarted","Data":"e1baea593ef9082101f89950bf2975a161938bc6fc70b0ea34199032d50887dc"} Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.697422 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4dvx6" event={"ID":"9aed3d60-8ff4-4b82-9bf8-7892dff01cff","Type":"ContainerStarted","Data":"00af603897747403f2999c8dd0ea82db99373770b631a4b1c83fa47765ccde4d"} Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.702598 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2c7a-account-create-update-s5pbw" event={"ID":"e4d3108c-ea49-46b7-896a-6303b5651abc","Type":"ContainerStarted","Data":"37eaf6f481fc9e5652aa133ddddaacb0e32c7be580e917decf70aee486af0ae2"} Jan 28 15:38:16 crc kubenswrapper[4656]: I0128 15:38:16.715634 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-4dvx6" podStartSLOduration=2.715611071 podStartE2EDuration="2.715611071s" podCreationTimestamp="2026-01-28 15:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:38:16.710187792 +0000 UTC m=+1187.218358596" watchObservedRunningTime="2026-01-28 15:38:16.715611071 +0000 UTC m=+1187.223781875" Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.721011 4656 generic.go:334] "Generic (PLEG): container finished" podID="9aed3d60-8ff4-4b82-9bf8-7892dff01cff" containerID="00af603897747403f2999c8dd0ea82db99373770b631a4b1c83fa47765ccde4d" exitCode=0 Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.721450 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4dvx6" event={"ID":"9aed3d60-8ff4-4b82-9bf8-7892dff01cff","Type":"ContainerDied","Data":"00af603897747403f2999c8dd0ea82db99373770b631a4b1c83fa47765ccde4d"} Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.729814 4656 generic.go:334] "Generic (PLEG): container finished" podID="e4d3108c-ea49-46b7-896a-6303b5651abc" containerID="02f441349595b2fea556f764337befa680c75f21f42833da737a52fe0e77200f" exitCode=0 Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.729893 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2c7a-account-create-update-s5pbw" event={"ID":"e4d3108c-ea49-46b7-896a-6303b5651abc","Type":"ContainerDied","Data":"02f441349595b2fea556f764337befa680c75f21f42833da737a52fe0e77200f"} Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.743131 4656 generic.go:334] "Generic (PLEG): container finished" podID="1dc7809e-fb0b-4e26-ad3b-45aceb483265" containerID="266a34fa5d7bf2b9b096e72f7c9b6721bd869c8c77436d0e440426fb23a20543" exitCode=0 Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.743368 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a9b7-account-create-update-sf97j" event={"ID":"1dc7809e-fb0b-4e26-ad3b-45aceb483265","Type":"ContainerDied","Data":"266a34fa5d7bf2b9b096e72f7c9b6721bd869c8c77436d0e440426fb23a20543"} Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.749791 4656 generic.go:334] "Generic (PLEG): container finished" podID="cf48bf2a-2b8d-41ac-a712-9218091e8352" containerID="8bb388635794bed8d36ed9b59e0ebe5aa23722a91994b61ca6e7ca5038083d1d" exitCode=0 Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.749924 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3131-account-create-update-6kp5h" 
event={"ID":"cf48bf2a-2b8d-41ac-a712-9218091e8352","Type":"ContainerDied","Data":"8bb388635794bed8d36ed9b59e0ebe5aa23722a91994b61ca6e7ca5038083d1d"} Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.765709 4656 generic.go:334] "Generic (PLEG): container finished" podID="2455ddf8-bd67-4fe1-821e-0feda40d7da9" containerID="810dbe4d5afc6a6d7cc6184ae641765eef2d6efff2d1a416b9a80f9cc06da73c" exitCode=0 Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.765833 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vdvd7" event={"ID":"2455ddf8-bd67-4fe1-821e-0feda40d7da9","Type":"ContainerDied","Data":"810dbe4d5afc6a6d7cc6184ae641765eef2d6efff2d1a416b9a80f9cc06da73c"} Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.786668 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"5d03e1f80bfb590b2e47db4f68a32fcc2fd42b2e3a7e3e18ad3f5f155bc2ceb8"} Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.786717 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"86e5928c7021ee3160169a65c05980429646128df05a0fbdc7fae6ebb6902c4e"} Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.786731 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"1ccf60d8a00f0fe980d3dbe53bea218bfc7b661bca85b6afb67d3b12e5964d6b"} Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.788432 4656 generic.go:334] "Generic (PLEG): container finished" podID="ccf851bf-a272-4c0a-99a1-97c464d23a0d" containerID="d0d1b804843eff8d434244a805f92f46f9f0772ead6e44bf1c0d417856331fa2" exitCode=0 Jan 28 15:38:17 crc kubenswrapper[4656]: I0128 15:38:17.788466 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cc7jv" event={"ID":"ccf851bf-a272-4c0a-99a1-97c464d23a0d","Type":"ContainerDied","Data":"d0d1b804843eff8d434244a805f92f46f9f0772ead6e44bf1c0d417856331fa2"} Jan 28 15:38:18 crc kubenswrapper[4656]: I0128 15:38:18.806097 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"51994d9c3feae3a7b614810d1e45291d749558636dc7d444ab98eae34b64f5d3"} Jan 28 15:38:18 crc kubenswrapper[4656]: I0128 15:38:18.806467 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"7ba9f3532dc8869112da9f673ed36967c33a0598037780a6378c01917e816d53"} Jan 28 15:38:18 crc kubenswrapper[4656]: I0128 15:38:18.806485 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"83919565bd6124f8fbe0ee8f5b1f822feb504fb6e3e76efaa6e313cfd8bfdd81"} Jan 28 15:38:18 crc kubenswrapper[4656]: I0128 15:38:18.806496 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19a7b52a-dfe9-47b0-818e-48752d76068e","Type":"ContainerStarted","Data":"019b74f0ac49f4c42132100b1d22e149789a25facc2afc452c3afc5fa3e95e4b"} Jan 28 15:38:18 crc kubenswrapper[4656]: I0128 15:38:18.861690 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/swift-storage-0" podStartSLOduration=43.65019853 podStartE2EDuration="50.86166428s" podCreationTimestamp="2026-01-28 15:37:28 +0000 UTC" firstStartedPulling="2026-01-28 15:38:09.444908631 +0000 UTC m=+1179.953079435" lastFinishedPulling="2026-01-28 15:38:16.656374371 +0000 UTC m=+1187.164545185" observedRunningTime="2026-01-28 15:38:18.852783217 +0000 UTC m=+1189.360954031" watchObservedRunningTime="2026-01-28 15:38:18.86166428 +0000 UTC m=+1189.369835084" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.266291 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cqdfp"] Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.267894 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.273145 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cqdfp"] Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.273482 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.304871 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.317377 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqvqx\" (UniqueName: \"kubernetes.io/projected/a2a47883-aa14-4711-9dad-f6d38bcc706d-kube-api-access-xqvqx\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.317467 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-config\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.317492 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.317527 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.317549 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.317616 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.418558 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf48bf2a-2b8d-41ac-a712-9218091e8352-operator-scripts\") pod \"cf48bf2a-2b8d-41ac-a712-9218091e8352\" (UID: \"cf48bf2a-2b8d-41ac-a712-9218091e8352\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.418671 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4jbc\" (UniqueName: \"kubernetes.io/projected/cf48bf2a-2b8d-41ac-a712-9218091e8352-kube-api-access-q4jbc\") pod \"cf48bf2a-2b8d-41ac-a712-9218091e8352\" (UID: \"cf48bf2a-2b8d-41ac-a712-9218091e8352\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.419034 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqvqx\" (UniqueName: \"kubernetes.io/projected/a2a47883-aa14-4711-9dad-f6d38bcc706d-kube-api-access-xqvqx\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.419068 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-config\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.419088 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.419103 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.419121 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.419141 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.419960 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-nb\") pod 
\"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.420310 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf48bf2a-2b8d-41ac-a712-9218091e8352-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf48bf2a-2b8d-41ac-a712-9218091e8352" (UID: "cf48bf2a-2b8d-41ac-a712-9218091e8352"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.422112 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.423373 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-config\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.424069 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.424707 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.440649 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf48bf2a-2b8d-41ac-a712-9218091e8352-kube-api-access-q4jbc" (OuterVolumeSpecName: "kube-api-access-q4jbc") pod "cf48bf2a-2b8d-41ac-a712-9218091e8352" (UID: "cf48bf2a-2b8d-41ac-a712-9218091e8352"). InnerVolumeSpecName "kube-api-access-q4jbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.453930 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqvqx\" (UniqueName: \"kubernetes.io/projected/a2a47883-aa14-4711-9dad-f6d38bcc706d-kube-api-access-xqvqx\") pod \"dnsmasq-dns-764c5664d7-cqdfp\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.478566 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.489873 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.511351 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.515808 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.519956 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dps9p\" (UniqueName: \"kubernetes.io/projected/1dc7809e-fb0b-4e26-ad3b-45aceb483265-kube-api-access-dps9p\") pod \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\" (UID: \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.520026 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc7809e-fb0b-4e26-ad3b-45aceb483265-operator-scripts\") pod \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\" (UID: \"1dc7809e-fb0b-4e26-ad3b-45aceb483265\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.520102 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccf851bf-a272-4c0a-99a1-97c464d23a0d-operator-scripts\") pod \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\" (UID: \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.520157 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxcbv\" (UniqueName: \"kubernetes.io/projected/ccf851bf-a272-4c0a-99a1-97c464d23a0d-kube-api-access-wxcbv\") pod \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\" (UID: \"ccf851bf-a272-4c0a-99a1-97c464d23a0d\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.520505 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4jbc\" (UniqueName: \"kubernetes.io/projected/cf48bf2a-2b8d-41ac-a712-9218091e8352-kube-api-access-q4jbc\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.520521 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf48bf2a-2b8d-41ac-a712-9218091e8352-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.521470 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccf851bf-a272-4c0a-99a1-97c464d23a0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ccf851bf-a272-4c0a-99a1-97c464d23a0d" (UID: "ccf851bf-a272-4c0a-99a1-97c464d23a0d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.521539 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.523610 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccf851bf-a272-4c0a-99a1-97c464d23a0d-kube-api-access-wxcbv" (OuterVolumeSpecName: "kube-api-access-wxcbv") pod "ccf851bf-a272-4c0a-99a1-97c464d23a0d" (UID: "ccf851bf-a272-4c0a-99a1-97c464d23a0d"). InnerVolumeSpecName "kube-api-access-wxcbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.523672 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc7809e-fb0b-4e26-ad3b-45aceb483265-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1dc7809e-fb0b-4e26-ad3b-45aceb483265" (UID: "1dc7809e-fb0b-4e26-ad3b-45aceb483265"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.524361 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dc7809e-fb0b-4e26-ad3b-45aceb483265-kube-api-access-dps9p" (OuterVolumeSpecName: "kube-api-access-dps9p") pod "1dc7809e-fb0b-4e26-ad3b-45aceb483265" (UID: "1dc7809e-fb0b-4e26-ad3b-45aceb483265"). InnerVolumeSpecName "kube-api-access-dps9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.621213 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2455ddf8-bd67-4fe1-821e-0feda40d7da9-operator-scripts\") pod \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\" (UID: \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.621365 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-operator-scripts\") pod \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\" (UID: \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.621392 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57z68\" (UniqueName: \"kubernetes.io/projected/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-kube-api-access-57z68\") pod \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\" (UID: \"9aed3d60-8ff4-4b82-9bf8-7892dff01cff\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.621419 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz4kn\" (UniqueName: \"kubernetes.io/projected/e4d3108c-ea49-46b7-896a-6303b5651abc-kube-api-access-dz4kn\") pod \"e4d3108c-ea49-46b7-896a-6303b5651abc\" (UID: \"e4d3108c-ea49-46b7-896a-6303b5651abc\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.621544 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkr88\" (UniqueName: \"kubernetes.io/projected/2455ddf8-bd67-4fe1-821e-0feda40d7da9-kube-api-access-bkr88\") pod \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\" (UID: \"2455ddf8-bd67-4fe1-821e-0feda40d7da9\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.621587 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d3108c-ea49-46b7-896a-6303b5651abc-operator-scripts\") pod \"e4d3108c-ea49-46b7-896a-6303b5651abc\" (UID: \"e4d3108c-ea49-46b7-896a-6303b5651abc\") " Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.621997 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dps9p\" (UniqueName: \"kubernetes.io/projected/1dc7809e-fb0b-4e26-ad3b-45aceb483265-kube-api-access-dps9p\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.622021 4656 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc7809e-fb0b-4e26-ad3b-45aceb483265-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.622032 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccf851bf-a272-4c0a-99a1-97c464d23a0d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.622045 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxcbv\" (UniqueName: \"kubernetes.io/projected/ccf851bf-a272-4c0a-99a1-97c464d23a0d-kube-api-access-wxcbv\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.622594 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d3108c-ea49-46b7-896a-6303b5651abc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4d3108c-ea49-46b7-896a-6303b5651abc" (UID: "e4d3108c-ea49-46b7-896a-6303b5651abc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.623041 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2455ddf8-bd67-4fe1-821e-0feda40d7da9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2455ddf8-bd67-4fe1-821e-0feda40d7da9" (UID: "2455ddf8-bd67-4fe1-821e-0feda40d7da9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.623665 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9aed3d60-8ff4-4b82-9bf8-7892dff01cff" (UID: "9aed3d60-8ff4-4b82-9bf8-7892dff01cff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.627117 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2455ddf8-bd67-4fe1-821e-0feda40d7da9-kube-api-access-bkr88" (OuterVolumeSpecName: "kube-api-access-bkr88") pod "2455ddf8-bd67-4fe1-821e-0feda40d7da9" (UID: "2455ddf8-bd67-4fe1-821e-0feda40d7da9"). InnerVolumeSpecName "kube-api-access-bkr88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.628113 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d3108c-ea49-46b7-896a-6303b5651abc-kube-api-access-dz4kn" (OuterVolumeSpecName: "kube-api-access-dz4kn") pod "e4d3108c-ea49-46b7-896a-6303b5651abc" (UID: "e4d3108c-ea49-46b7-896a-6303b5651abc"). InnerVolumeSpecName "kube-api-access-dz4kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.630533 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-kube-api-access-57z68" (OuterVolumeSpecName: "kube-api-access-57z68") pod "9aed3d60-8ff4-4b82-9bf8-7892dff01cff" (UID: "9aed3d60-8ff4-4b82-9bf8-7892dff01cff"). InnerVolumeSpecName "kube-api-access-57z68". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.643034 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.725263 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkr88\" (UniqueName: \"kubernetes.io/projected/2455ddf8-bd67-4fe1-821e-0feda40d7da9-kube-api-access-bkr88\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.725293 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d3108c-ea49-46b7-896a-6303b5651abc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.725303 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2455ddf8-bd67-4fe1-821e-0feda40d7da9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.725314 4656 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.725322 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57z68\" (UniqueName: \"kubernetes.io/projected/9aed3d60-8ff4-4b82-9bf8-7892dff01cff-kube-api-access-57z68\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:19 crc kubenswrapper[4656]: I0128 15:38:19.725331 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz4kn\" (UniqueName: \"kubernetes.io/projected/e4d3108c-ea49-46b7-896a-6303b5651abc-kube-api-access-dz4kn\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.020129 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2c7a-account-create-update-s5pbw" event={"ID":"e4d3108c-ea49-46b7-896a-6303b5651abc","Type":"ContainerDied","Data":"37eaf6f481fc9e5652aa133ddddaacb0e32c7be580e917decf70aee486af0ae2"} Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.020531 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37eaf6f481fc9e5652aa133ddddaacb0e32c7be580e917decf70aee486af0ae2" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.020628 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2c7a-account-create-update-s5pbw" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.035816 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3131-account-create-update-6kp5h" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.035864 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3131-account-create-update-6kp5h" event={"ID":"cf48bf2a-2b8d-41ac-a712-9218091e8352","Type":"ContainerDied","Data":"af8fdcd790caf1f39c9294b7636081ecd279be7604886dcdfc416f7645a5171e"} Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.035927 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af8fdcd790caf1f39c9294b7636081ecd279be7604886dcdfc416f7645a5171e" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.043897 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vdvd7" event={"ID":"2455ddf8-bd67-4fe1-821e-0feda40d7da9","Type":"ContainerDied","Data":"448f5e3766be5e25db50996e41eb26b4bf0333c1878fda9c77f99f41f24674d8"} Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.043939 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vdvd7" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.043943 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="448f5e3766be5e25db50996e41eb26b4bf0333c1878fda9c77f99f41f24674d8" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.053298 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cc7jv" event={"ID":"ccf851bf-a272-4c0a-99a1-97c464d23a0d","Type":"ContainerDied","Data":"e1baea593ef9082101f89950bf2975a161938bc6fc70b0ea34199032d50887dc"} Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.053336 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1baea593ef9082101f89950bf2975a161938bc6fc70b0ea34199032d50887dc" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.053406 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cc7jv" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.063997 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4dvx6" event={"ID":"9aed3d60-8ff4-4b82-9bf8-7892dff01cff","Type":"ContainerDied","Data":"d76b77f287ea9dec0ef0421a43060b12341366ebba74f0e217fcb0bea0847d57"} Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.064034 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d76b77f287ea9dec0ef0421a43060b12341366ebba74f0e217fcb0bea0847d57" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.064073 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4dvx6" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.066934 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a9b7-account-create-update-sf97j" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.067096 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a9b7-account-create-update-sf97j" event={"ID":"1dc7809e-fb0b-4e26-ad3b-45aceb483265","Type":"ContainerDied","Data":"49a1c134769a44dcbd496872d4d2a6b24f18b38e3711f4fc9046bd67e56974d5"} Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.067125 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49a1c134769a44dcbd496872d4d2a6b24f18b38e3711f4fc9046bd67e56974d5" Jan 28 15:38:20 crc kubenswrapper[4656]: I0128 15:38:20.272453 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cqdfp"] Jan 28 15:38:20 crc kubenswrapper[4656]: W0128 15:38:20.277260 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2a47883_aa14_4711_9dad_f6d38bcc706d.slice/crio-b9f02976b2b0ef03be91cf5c1b06ce1bb110d463510277fa0a060cd45307c5c5 WatchSource:0}: Error finding container b9f02976b2b0ef03be91cf5c1b06ce1bb110d463510277fa0a060cd45307c5c5: Status 404 returned error can't find the container with id b9f02976b2b0ef03be91cf5c1b06ce1bb110d463510277fa0a060cd45307c5c5 Jan 28 15:38:21 crc kubenswrapper[4656]: I0128 15:38:21.076038 4656 generic.go:334] "Generic (PLEG): container finished" podID="a2a47883-aa14-4711-9dad-f6d38bcc706d" containerID="c32f049d51119c0894e074dc9838f05cf80306bb19c54ff776651984f935948f" exitCode=0 Jan 28 15:38:21 crc kubenswrapper[4656]: I0128 15:38:21.076585 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" event={"ID":"a2a47883-aa14-4711-9dad-f6d38bcc706d","Type":"ContainerDied","Data":"c32f049d51119c0894e074dc9838f05cf80306bb19c54ff776651984f935948f"} Jan 28 15:38:21 crc kubenswrapper[4656]: I0128 15:38:21.076639 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" event={"ID":"a2a47883-aa14-4711-9dad-f6d38bcc706d","Type":"ContainerStarted","Data":"b9f02976b2b0ef03be91cf5c1b06ce1bb110d463510277fa0a060cd45307c5c5"} Jan 28 15:38:22 crc kubenswrapper[4656]: I0128 15:38:22.086313 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" event={"ID":"a2a47883-aa14-4711-9dad-f6d38bcc706d","Type":"ContainerStarted","Data":"d1c6cf74fc397e249939755bb43aa2db7be45a672b9ad5c5703c67ba5d1c06a1"} Jan 28 15:38:22 crc kubenswrapper[4656]: I0128 15:38:22.086691 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:22 crc kubenswrapper[4656]: I0128 15:38:22.110812 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" podStartSLOduration=3.110790236 podStartE2EDuration="3.110790236s" podCreationTimestamp="2026-01-28 15:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:38:22.105102401 +0000 UTC m=+1192.613273205" watchObservedRunningTime="2026-01-28 15:38:22.110790236 +0000 UTC m=+1192.618961030" Jan 28 15:38:26 crc kubenswrapper[4656]: I0128 15:38:26.119476 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-842wg" 
event={"ID":"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c","Type":"ContainerStarted","Data":"b5f01d437841162f7d723c8a3461fbed8e9508da364b69783ba5047744e009bb"} Jan 28 15:38:26 crc kubenswrapper[4656]: I0128 15:38:26.144887 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-842wg" podStartSLOduration=1.9414290090000001 podStartE2EDuration="35.144859862s" podCreationTimestamp="2026-01-28 15:37:51 +0000 UTC" firstStartedPulling="2026-01-28 15:37:51.95858719 +0000 UTC m=+1162.466758004" lastFinishedPulling="2026-01-28 15:38:25.162018043 +0000 UTC m=+1195.670188857" observedRunningTime="2026-01-28 15:38:26.141308084 +0000 UTC m=+1196.649478888" watchObservedRunningTime="2026-01-28 15:38:26.144859862 +0000 UTC m=+1196.653030676" Jan 28 15:38:29 crc kubenswrapper[4656]: I0128 15:38:29.645459 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:29 crc kubenswrapper[4656]: I0128 15:38:29.722747 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jpj69"] Jan 28 15:38:29 crc kubenswrapper[4656]: I0128 15:38:29.722998 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-jpj69" podUID="c152e3b8-7b70-4580-988e-4cf053f87aa2" containerName="dnsmasq-dns" containerID="cri-o://88b915705c2ba5912b5cc50a01d396aa718cc235e8432ca2223a91e7cf085d37" gracePeriod=10 Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.151841 4656 generic.go:334] "Generic (PLEG): container finished" podID="c152e3b8-7b70-4580-988e-4cf053f87aa2" containerID="88b915705c2ba5912b5cc50a01d396aa718cc235e8432ca2223a91e7cf085d37" exitCode=0 Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.151944 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jpj69" event={"ID":"c152e3b8-7b70-4580-988e-4cf053f87aa2","Type":"ContainerDied","Data":"88b915705c2ba5912b5cc50a01d396aa718cc235e8432ca2223a91e7cf085d37"} Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.152141 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jpj69" event={"ID":"c152e3b8-7b70-4580-988e-4cf053f87aa2","Type":"ContainerDied","Data":"51515a2b95a452c7a97ce3ad5d48cea215fd18e12f3ce81f0a7c8990597a60e1"} Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.152203 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51515a2b95a452c7a97ce3ad5d48cea215fd18e12f3ce81f0a7c8990597a60e1" Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.157213 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.269582 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-dns-svc\") pod \"c152e3b8-7b70-4580-988e-4cf053f87aa2\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.269809 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-nb\") pod \"c152e3b8-7b70-4580-988e-4cf053f87aa2\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.269871 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-sb\") pod \"c152e3b8-7b70-4580-988e-4cf053f87aa2\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.269896 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d578\" (UniqueName: \"kubernetes.io/projected/c152e3b8-7b70-4580-988e-4cf053f87aa2-kube-api-access-8d578\") pod \"c152e3b8-7b70-4580-988e-4cf053f87aa2\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.269953 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-config\") pod \"c152e3b8-7b70-4580-988e-4cf053f87aa2\" (UID: \"c152e3b8-7b70-4580-988e-4cf053f87aa2\") " Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.284394 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c152e3b8-7b70-4580-988e-4cf053f87aa2-kube-api-access-8d578" (OuterVolumeSpecName: "kube-api-access-8d578") pod "c152e3b8-7b70-4580-988e-4cf053f87aa2" (UID: "c152e3b8-7b70-4580-988e-4cf053f87aa2"). InnerVolumeSpecName "kube-api-access-8d578". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.313372 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c152e3b8-7b70-4580-988e-4cf053f87aa2" (UID: "c152e3b8-7b70-4580-988e-4cf053f87aa2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.324532 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c152e3b8-7b70-4580-988e-4cf053f87aa2" (UID: "c152e3b8-7b70-4580-988e-4cf053f87aa2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.336897 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-config" (OuterVolumeSpecName: "config") pod "c152e3b8-7b70-4580-988e-4cf053f87aa2" (UID: "c152e3b8-7b70-4580-988e-4cf053f87aa2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.337115 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c152e3b8-7b70-4580-988e-4cf053f87aa2" (UID: "c152e3b8-7b70-4580-988e-4cf053f87aa2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.371327 4656 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.371377 4656 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.371391 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d578\" (UniqueName: \"kubernetes.io/projected/c152e3b8-7b70-4580-988e-4cf053f87aa2-kube-api-access-8d578\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.371406 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:30 crc kubenswrapper[4656]: I0128 15:38:30.371414 4656 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c152e3b8-7b70-4580-988e-4cf053f87aa2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:31 crc kubenswrapper[4656]: I0128 15:38:31.159790 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jpj69" Jan 28 15:38:31 crc kubenswrapper[4656]: I0128 15:38:31.210638 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jpj69"] Jan 28 15:38:31 crc kubenswrapper[4656]: I0128 15:38:31.221641 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jpj69"] Jan 28 15:38:32 crc kubenswrapper[4656]: I0128 15:38:32.169375 4656 generic.go:334] "Generic (PLEG): container finished" podID="72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" containerID="b5f01d437841162f7d723c8a3461fbed8e9508da364b69783ba5047744e009bb" exitCode=0 Jan 28 15:38:32 crc kubenswrapper[4656]: I0128 15:38:32.169479 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-842wg" event={"ID":"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c","Type":"ContainerDied","Data":"b5f01d437841162f7d723c8a3461fbed8e9508da364b69783ba5047744e009bb"} Jan 28 15:38:33 crc kubenswrapper[4656]: I0128 15:38:33.179525 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c152e3b8-7b70-4580-988e-4cf053f87aa2" path="/var/lib/kubelet/pods/c152e3b8-7b70-4580-988e-4cf053f87aa2/volumes" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.542708 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-842wg" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.623609 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-db-sync-config-data\") pod \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.623749 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-combined-ca-bundle\") pod \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.623778 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-config-data\") pod \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.623811 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6d9d\" (UniqueName: \"kubernetes.io/projected/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-kube-api-access-x6d9d\") pod \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\" (UID: \"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c\") " Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.642843 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" (UID: "72cfa9c1-01ab-4c7e-80fa-f99e63b2602c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.643470 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-kube-api-access-x6d9d" (OuterVolumeSpecName: "kube-api-access-x6d9d") pod "72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" (UID: "72cfa9c1-01ab-4c7e-80fa-f99e63b2602c"). InnerVolumeSpecName "kube-api-access-x6d9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.652861 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" (UID: "72cfa9c1-01ab-4c7e-80fa-f99e63b2602c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.672621 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-config-data" (OuterVolumeSpecName: "config-data") pod "72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" (UID: "72cfa9c1-01ab-4c7e-80fa-f99e63b2602c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.725143 4656 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.725186 4656 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.725196 4656 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:33.725206 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6d9d\" (UniqueName: \"kubernetes.io/projected/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c-kube-api-access-x6d9d\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.187784 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-842wg" event={"ID":"72cfa9c1-01ab-4c7e-80fa-f99e63b2602c","Type":"ContainerDied","Data":"504211dfa97dc92df86e8b70c17397351feacd00a97b729f5ccfa3e6d0b19223"} Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.188057 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="504211dfa97dc92df86e8b70c17397351feacd00a97b729f5ccfa3e6d0b19223" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.187852 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-842wg" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624034 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8fng8"] Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624469 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" containerName="glance-db-sync" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624494 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" containerName="glance-db-sync" Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624511 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d3108c-ea49-46b7-896a-6303b5651abc" containerName="mariadb-account-create-update" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624520 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d3108c-ea49-46b7-896a-6303b5651abc" containerName="mariadb-account-create-update" Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624532 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf48bf2a-2b8d-41ac-a712-9218091e8352" containerName="mariadb-account-create-update" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624539 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf48bf2a-2b8d-41ac-a712-9218091e8352" containerName="mariadb-account-create-update" Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624553 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccf851bf-a272-4c0a-99a1-97c464d23a0d" containerName="mariadb-database-create" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624562 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccf851bf-a272-4c0a-99a1-97c464d23a0d" containerName="mariadb-database-create" Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624578 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2455ddf8-bd67-4fe1-821e-0feda40d7da9" containerName="mariadb-database-create" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624586 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="2455ddf8-bd67-4fe1-821e-0feda40d7da9" containerName="mariadb-database-create" Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624599 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c152e3b8-7b70-4580-988e-4cf053f87aa2" containerName="dnsmasq-dns" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624606 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="c152e3b8-7b70-4580-988e-4cf053f87aa2" containerName="dnsmasq-dns" Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624640 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c152e3b8-7b70-4580-988e-4cf053f87aa2" containerName="init" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624649 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="c152e3b8-7b70-4580-988e-4cf053f87aa2" containerName="init" Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624660 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aed3d60-8ff4-4b82-9bf8-7892dff01cff" containerName="mariadb-database-create" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624667 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aed3d60-8ff4-4b82-9bf8-7892dff01cff" containerName="mariadb-database-create" Jan 28 15:38:34 crc kubenswrapper[4656]: E0128 15:38:34.624683 4656 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc7809e-fb0b-4e26-ad3b-45aceb483265" containerName="mariadb-account-create-update" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624698 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc7809e-fb0b-4e26-ad3b-45aceb483265" containerName="mariadb-account-create-update" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624945 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc7809e-fb0b-4e26-ad3b-45aceb483265" containerName="mariadb-account-create-update" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624976 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="2455ddf8-bd67-4fe1-821e-0feda40d7da9" containerName="mariadb-database-create" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624986 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d3108c-ea49-46b7-896a-6303b5651abc" containerName="mariadb-account-create-update" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.624996 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="c152e3b8-7b70-4580-988e-4cf053f87aa2" containerName="dnsmasq-dns" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.625008 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" containerName="glance-db-sync" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.625027 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccf851bf-a272-4c0a-99a1-97c464d23a0d" containerName="mariadb-database-create" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.625040 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf48bf2a-2b8d-41ac-a712-9218091e8352" containerName="mariadb-account-create-update" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.625052 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aed3d60-8ff4-4b82-9bf8-7892dff01cff" containerName="mariadb-database-create" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.626173 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.662298 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8fng8"] Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.794370 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.794467 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.794542 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.794584 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-config\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.794649 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.794706 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cr8h\" (UniqueName: \"kubernetes.io/projected/0d7bd9aa-43b5-4819-9bef-a61574670ba6-kube-api-access-7cr8h\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.895989 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.896054 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.896113 4656 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.896146 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-config\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.896242 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.896414 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cr8h\" (UniqueName: \"kubernetes.io/projected/0d7bd9aa-43b5-4819-9bef-a61574670ba6-kube-api-access-7cr8h\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.897332 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.897349 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.897392 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-config\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.897510 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.897728 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d7bd9aa-43b5-4819-9bef-a61574670ba6-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.917177 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cr8h\" (UniqueName: 
\"kubernetes.io/projected/0d7bd9aa-43b5-4819-9bef-a61574670ba6-kube-api-access-7cr8h\") pod \"dnsmasq-dns-74f6bcbc87-8fng8\" (UID: \"0d7bd9aa-43b5-4819-9bef-a61574670ba6\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:34 crc kubenswrapper[4656]: I0128 15:38:34.943756 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:35 crc kubenswrapper[4656]: I0128 15:38:35.455253 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8fng8"] Jan 28 15:38:35 crc kubenswrapper[4656]: W0128 15:38:35.455316 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d7bd9aa_43b5_4819_9bef_a61574670ba6.slice/crio-42f926b63c13038ba967b33cf82f2a497052b6a1c4148c749b3d09afae66ae61 WatchSource:0}: Error finding container 42f926b63c13038ba967b33cf82f2a497052b6a1c4148c749b3d09afae66ae61: Status 404 returned error can't find the container with id 42f926b63c13038ba967b33cf82f2a497052b6a1c4148c749b3d09afae66ae61 Jan 28 15:38:36 crc kubenswrapper[4656]: I0128 15:38:36.208039 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" event={"ID":"0d7bd9aa-43b5-4819-9bef-a61574670ba6","Type":"ContainerStarted","Data":"42f926b63c13038ba967b33cf82f2a497052b6a1c4148c749b3d09afae66ae61"} Jan 28 15:38:37 crc kubenswrapper[4656]: I0128 15:38:37.220406 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" event={"ID":"0d7bd9aa-43b5-4819-9bef-a61574670ba6","Type":"ContainerStarted","Data":"9315b2ae2af35b8d1c9ebe22d5f2551303838ed4de4faf1ae9b74ba2e0e4fbd1"} Jan 28 15:38:38 crc kubenswrapper[4656]: I0128 15:38:38.230580 4656 generic.go:334] "Generic (PLEG): container finished" podID="0d7bd9aa-43b5-4819-9bef-a61574670ba6" containerID="9315b2ae2af35b8d1c9ebe22d5f2551303838ed4de4faf1ae9b74ba2e0e4fbd1" exitCode=0 Jan 28 15:38:38 crc kubenswrapper[4656]: I0128 15:38:38.230631 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" event={"ID":"0d7bd9aa-43b5-4819-9bef-a61574670ba6","Type":"ContainerDied","Data":"9315b2ae2af35b8d1c9ebe22d5f2551303838ed4de4faf1ae9b74ba2e0e4fbd1"} Jan 28 15:38:39 crc kubenswrapper[4656]: I0128 15:38:39.245040 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" event={"ID":"0d7bd9aa-43b5-4819-9bef-a61574670ba6","Type":"ContainerStarted","Data":"d090df0d1ab62b9ad7988b6f8160bc07dccc14adaccb0b2d23c9f7106b85dd58"} Jan 28 15:38:39 crc kubenswrapper[4656]: I0128 15:38:39.245596 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:39 crc kubenswrapper[4656]: I0128 15:38:39.272368 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" podStartSLOduration=5.272346077 podStartE2EDuration="5.272346077s" podCreationTimestamp="2026-01-28 15:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:38:39.268129262 +0000 UTC m=+1209.776300056" watchObservedRunningTime="2026-01-28 15:38:39.272346077 +0000 UTC m=+1209.780516891" Jan 28 15:38:41 crc kubenswrapper[4656]: I0128 15:38:41.263808 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:38:41 crc kubenswrapper[4656]: I0128 15:38:41.264186 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:38:44 crc kubenswrapper[4656]: I0128 15:38:44.945450 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-8fng8" Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.003884 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cqdfp"] Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.004191 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" podUID="a2a47883-aa14-4711-9dad-f6d38bcc706d" containerName="dnsmasq-dns" containerID="cri-o://d1c6cf74fc397e249939755bb43aa2db7be45a672b9ad5c5703c67ba5d1c06a1" gracePeriod=10 Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.303432 4656 generic.go:334] "Generic (PLEG): container finished" podID="a2a47883-aa14-4711-9dad-f6d38bcc706d" containerID="d1c6cf74fc397e249939755bb43aa2db7be45a672b9ad5c5703c67ba5d1c06a1" exitCode=0 Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.303507 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" event={"ID":"a2a47883-aa14-4711-9dad-f6d38bcc706d","Type":"ContainerDied","Data":"d1c6cf74fc397e249939755bb43aa2db7be45a672b9ad5c5703c67ba5d1c06a1"} Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.523277 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.674620 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-svc\") pod \"a2a47883-aa14-4711-9dad-f6d38bcc706d\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.674689 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-sb\") pod \"a2a47883-aa14-4711-9dad-f6d38bcc706d\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.674735 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-swift-storage-0\") pod \"a2a47883-aa14-4711-9dad-f6d38bcc706d\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.674806 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqvqx\" (UniqueName: \"kubernetes.io/projected/a2a47883-aa14-4711-9dad-f6d38bcc706d-kube-api-access-xqvqx\") pod \"a2a47883-aa14-4711-9dad-f6d38bcc706d\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.674923 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-nb\") pod \"a2a47883-aa14-4711-9dad-f6d38bcc706d\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.674982 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-config\") pod \"a2a47883-aa14-4711-9dad-f6d38bcc706d\" (UID: \"a2a47883-aa14-4711-9dad-f6d38bcc706d\") " Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.680822 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2a47883-aa14-4711-9dad-f6d38bcc706d-kube-api-access-xqvqx" (OuterVolumeSpecName: "kube-api-access-xqvqx") pod "a2a47883-aa14-4711-9dad-f6d38bcc706d" (UID: "a2a47883-aa14-4711-9dad-f6d38bcc706d"). InnerVolumeSpecName "kube-api-access-xqvqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.726677 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a2a47883-aa14-4711-9dad-f6d38bcc706d" (UID: "a2a47883-aa14-4711-9dad-f6d38bcc706d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.734946 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a2a47883-aa14-4711-9dad-f6d38bcc706d" (UID: "a2a47883-aa14-4711-9dad-f6d38bcc706d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.740148 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-config" (OuterVolumeSpecName: "config") pod "a2a47883-aa14-4711-9dad-f6d38bcc706d" (UID: "a2a47883-aa14-4711-9dad-f6d38bcc706d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.740721 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a2a47883-aa14-4711-9dad-f6d38bcc706d" (UID: "a2a47883-aa14-4711-9dad-f6d38bcc706d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.744555 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a2a47883-aa14-4711-9dad-f6d38bcc706d" (UID: "a2a47883-aa14-4711-9dad-f6d38bcc706d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.777905 4656 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.777978 4656 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.777992 4656 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.778467 4656 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.778483 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqvqx\" (UniqueName: \"kubernetes.io/projected/a2a47883-aa14-4711-9dad-f6d38bcc706d-kube-api-access-xqvqx\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:45 crc kubenswrapper[4656]: I0128 15:38:45.778491 4656 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2a47883-aa14-4711-9dad-f6d38bcc706d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:46 crc kubenswrapper[4656]: I0128 15:38:46.313589 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" event={"ID":"a2a47883-aa14-4711-9dad-f6d38bcc706d","Type":"ContainerDied","Data":"b9f02976b2b0ef03be91cf5c1b06ce1bb110d463510277fa0a060cd45307c5c5"} Jan 28 15:38:46 crc kubenswrapper[4656]: I0128 15:38:46.313682 4656 scope.go:117] "RemoveContainer" containerID="d1c6cf74fc397e249939755bb43aa2db7be45a672b9ad5c5703c67ba5d1c06a1" Jan 28 15:38:46 crc kubenswrapper[4656]: I0128 15:38:46.313909 4656 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-cqdfp" Jan 28 15:38:46 crc kubenswrapper[4656]: I0128 15:38:46.351345 4656 scope.go:117] "RemoveContainer" containerID="c32f049d51119c0894e074dc9838f05cf80306bb19c54ff776651984f935948f" Jan 28 15:38:46 crc kubenswrapper[4656]: I0128 15:38:46.359298 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cqdfp"] Jan 28 15:38:46 crc kubenswrapper[4656]: I0128 15:38:46.372475 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cqdfp"] Jan 28 15:38:47 crc kubenswrapper[4656]: I0128 15:38:47.183444 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2a47883-aa14-4711-9dad-f6d38bcc706d" path="/var/lib/kubelet/pods/a2a47883-aa14-4711-9dad-f6d38bcc706d/volumes" Jan 28 15:39:11 crc kubenswrapper[4656]: I0128 15:39:11.263878 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:39:11 crc kubenswrapper[4656]: I0128 15:39:11.264685 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:39:11 crc kubenswrapper[4656]: I0128 15:39:11.264759 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:39:11 crc kubenswrapper[4656]: I0128 15:39:11.265740 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4705c729984fa104745366d57583e3ee80c3a326cc35a32720920b368391441"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:39:11 crc kubenswrapper[4656]: I0128 15:39:11.265832 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://e4705c729984fa104745366d57583e3ee80c3a326cc35a32720920b368391441" gracePeriod=600 Jan 28 15:39:11 crc kubenswrapper[4656]: I0128 15:39:11.528127 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="e4705c729984fa104745366d57583e3ee80c3a326cc35a32720920b368391441" exitCode=0 Jan 28 15:39:11 crc kubenswrapper[4656]: I0128 15:39:11.528193 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"e4705c729984fa104745366d57583e3ee80c3a326cc35a32720920b368391441"} Jan 28 15:39:11 crc kubenswrapper[4656]: I0128 15:39:11.528271 4656 scope.go:117] "RemoveContainer" containerID="45af716abfac826ba3a4dfbcd1d22436c5270721d55f11ffa5d85cae3cd0840f" Jan 28 15:39:12 crc kubenswrapper[4656]: I0128 15:39:12.536658 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"42409758b852b659a7b9211c23462e509a808c14e072325970ea9f330c308f98"} Jan 28 15:41:11 crc kubenswrapper[4656]: I0128 15:41:11.263831 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:41:11 crc kubenswrapper[4656]: I0128 15:41:11.264517 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.101998 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hn7cf"] Jan 28 15:41:40 crc kubenswrapper[4656]: E0128 15:41:40.102967 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a47883-aa14-4711-9dad-f6d38bcc706d" containerName="init" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.102989 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a47883-aa14-4711-9dad-f6d38bcc706d" containerName="init" Jan 28 15:41:40 crc kubenswrapper[4656]: E0128 15:41:40.103015 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a47883-aa14-4711-9dad-f6d38bcc706d" containerName="dnsmasq-dns" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.103024 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a47883-aa14-4711-9dad-f6d38bcc706d" containerName="dnsmasq-dns" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.105881 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a47883-aa14-4711-9dad-f6d38bcc706d" containerName="dnsmasq-dns" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.107817 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.136536 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hn7cf"] Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.166881 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kftrx\" (UniqueName: \"kubernetes.io/projected/64a04b9d-2d5a-4095-9292-c7d5de74f369-kube-api-access-kftrx\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.167398 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-catalog-content\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.167597 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-utilities\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.271843 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kftrx\" (UniqueName: \"kubernetes.io/projected/64a04b9d-2d5a-4095-9292-c7d5de74f369-kube-api-access-kftrx\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.272662 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-catalog-content\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.273508 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-catalog-content\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.273753 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-utilities\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.274189 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-utilities\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.293477 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kftrx\" (UniqueName: \"kubernetes.io/projected/64a04b9d-2d5a-4095-9292-c7d5de74f369-kube-api-access-kftrx\") pod \"redhat-operators-hn7cf\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:40 crc kubenswrapper[4656]: I0128 15:41:40.476366 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:41 crc kubenswrapper[4656]: I0128 15:41:41.022994 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hn7cf"] Jan 28 15:41:41 crc kubenswrapper[4656]: I0128 15:41:41.081802 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn7cf" event={"ID":"64a04b9d-2d5a-4095-9292-c7d5de74f369","Type":"ContainerStarted","Data":"bb25b8fc8ba237291897b37ef6bca9be8ae874b0b683f20ca0a47026007eccec"} Jan 28 15:41:41 crc kubenswrapper[4656]: I0128 15:41:41.264094 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:41:41 crc kubenswrapper[4656]: I0128 15:41:41.264256 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:41:42 crc kubenswrapper[4656]: I0128 15:41:42.092124 4656 generic.go:334] "Generic (PLEG): container finished" podID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerID="7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59" exitCode=0 Jan 28 15:41:42 crc kubenswrapper[4656]: I0128 15:41:42.092191 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn7cf" event={"ID":"64a04b9d-2d5a-4095-9292-c7d5de74f369","Type":"ContainerDied","Data":"7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59"} Jan 28 15:41:42 crc kubenswrapper[4656]: I0128 15:41:42.095128 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:41:44 crc kubenswrapper[4656]: I0128 15:41:44.111905 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn7cf" event={"ID":"64a04b9d-2d5a-4095-9292-c7d5de74f369","Type":"ContainerStarted","Data":"929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473"} Jan 28 15:41:45 crc kubenswrapper[4656]: I0128 15:41:45.121918 4656 generic.go:334] "Generic (PLEG): container finished" podID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerID="929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473" exitCode=0 Jan 28 15:41:45 crc kubenswrapper[4656]: I0128 15:41:45.121960 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn7cf" event={"ID":"64a04b9d-2d5a-4095-9292-c7d5de74f369","Type":"ContainerDied","Data":"929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473"} Jan 28 15:41:50 crc kubenswrapper[4656]: I0128 15:41:50.164779 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn7cf" 
event={"ID":"64a04b9d-2d5a-4095-9292-c7d5de74f369","Type":"ContainerStarted","Data":"000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570"} Jan 28 15:41:50 crc kubenswrapper[4656]: I0128 15:41:50.190793 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hn7cf" podStartSLOduration=2.537875113 podStartE2EDuration="10.190764837s" podCreationTimestamp="2026-01-28 15:41:40 +0000 UTC" firstStartedPulling="2026-01-28 15:41:42.094721099 +0000 UTC m=+1392.602891903" lastFinishedPulling="2026-01-28 15:41:49.747610823 +0000 UTC m=+1400.255781627" observedRunningTime="2026-01-28 15:41:50.180691167 +0000 UTC m=+1400.688861981" watchObservedRunningTime="2026-01-28 15:41:50.190764837 +0000 UTC m=+1400.698935641" Jan 28 15:41:50 crc kubenswrapper[4656]: I0128 15:41:50.478466 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:50 crc kubenswrapper[4656]: I0128 15:41:50.478778 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:41:51 crc kubenswrapper[4656]: I0128 15:41:51.534743 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hn7cf" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="registry-server" probeResult="failure" output=< Jan 28 15:41:51 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 15:41:51 crc kubenswrapper[4656]: > Jan 28 15:42:00 crc kubenswrapper[4656]: I0128 15:42:00.574217 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:42:00 crc kubenswrapper[4656]: I0128 15:42:00.685396 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:42:02 crc kubenswrapper[4656]: I0128 15:42:02.470622 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hn7cf"] Jan 28 15:42:02 crc kubenswrapper[4656]: I0128 15:42:02.471065 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hn7cf" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="registry-server" containerID="cri-o://000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570" gracePeriod=2 Jan 28 15:42:02 crc kubenswrapper[4656]: I0128 15:42:02.958550 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.070913 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-catalog-content\") pod \"64a04b9d-2d5a-4095-9292-c7d5de74f369\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.071051 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kftrx\" (UniqueName: \"kubernetes.io/projected/64a04b9d-2d5a-4095-9292-c7d5de74f369-kube-api-access-kftrx\") pod \"64a04b9d-2d5a-4095-9292-c7d5de74f369\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.072316 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-utilities\") pod \"64a04b9d-2d5a-4095-9292-c7d5de74f369\" (UID: \"64a04b9d-2d5a-4095-9292-c7d5de74f369\") " Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.073117 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-utilities" (OuterVolumeSpecName: "utilities") pod "64a04b9d-2d5a-4095-9292-c7d5de74f369" (UID: "64a04b9d-2d5a-4095-9292-c7d5de74f369"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.076698 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a04b9d-2d5a-4095-9292-c7d5de74f369-kube-api-access-kftrx" (OuterVolumeSpecName: "kube-api-access-kftrx") pod "64a04b9d-2d5a-4095-9292-c7d5de74f369" (UID: "64a04b9d-2d5a-4095-9292-c7d5de74f369"). InnerVolumeSpecName "kube-api-access-kftrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.176940 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kftrx\" (UniqueName: \"kubernetes.io/projected/64a04b9d-2d5a-4095-9292-c7d5de74f369-kube-api-access-kftrx\") on node \"crc\" DevicePath \"\"" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.176968 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.216621 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64a04b9d-2d5a-4095-9292-c7d5de74f369" (UID: "64a04b9d-2d5a-4095-9292-c7d5de74f369"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.269155 4656 generic.go:334] "Generic (PLEG): container finished" podID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerID="000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570" exitCode=0 Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.269328 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hn7cf" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.269317 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn7cf" event={"ID":"64a04b9d-2d5a-4095-9292-c7d5de74f369","Type":"ContainerDied","Data":"000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570"} Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.270467 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hn7cf" event={"ID":"64a04b9d-2d5a-4095-9292-c7d5de74f369","Type":"ContainerDied","Data":"bb25b8fc8ba237291897b37ef6bca9be8ae874b0b683f20ca0a47026007eccec"} Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.270515 4656 scope.go:117] "RemoveContainer" containerID="000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.279138 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a04b9d-2d5a-4095-9292-c7d5de74f369-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.294829 4656 scope.go:117] "RemoveContainer" containerID="929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.312527 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hn7cf"] Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.317445 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hn7cf"] Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.324215 4656 scope.go:117] "RemoveContainer" containerID="7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.365947 4656 scope.go:117] "RemoveContainer" containerID="000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570" Jan 28 15:42:03 crc kubenswrapper[4656]: E0128 15:42:03.366472 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570\": container with ID starting with 000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570 not found: ID does not exist" containerID="000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.366513 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570"} err="failed to get container status \"000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570\": rpc error: code = NotFound desc = could not find container \"000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570\": container with ID starting with 000ec01ab57226a33a4f70550631c96bc1c16e7a32e2bf1ce80cb6d56732d570 not found: ID does not exist" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.366536 4656 scope.go:117] "RemoveContainer" containerID="929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473" Jan 28 15:42:03 crc kubenswrapper[4656]: E0128 15:42:03.366882 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473\": container with ID 
starting with 929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473 not found: ID does not exist" containerID="929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.366919 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473"} err="failed to get container status \"929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473\": rpc error: code = NotFound desc = could not find container \"929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473\": container with ID starting with 929ed9b29fe5613b935c0804b783f56df2051dbc6c04e8c9bad303efa8d2d473 not found: ID does not exist" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.366941 4656 scope.go:117] "RemoveContainer" containerID="7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59" Jan 28 15:42:03 crc kubenswrapper[4656]: E0128 15:42:03.367194 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59\": container with ID starting with 7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59 not found: ID does not exist" containerID="7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59" Jan 28 15:42:03 crc kubenswrapper[4656]: I0128 15:42:03.367220 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59"} err="failed to get container status \"7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59\": rpc error: code = NotFound desc = could not find container \"7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59\": container with ID starting with 7e65d97fc94a555183450f0f79bf7bf91ccfbc5a4d9511811a1603799aa60a59 not found: ID does not exist" Jan 28 15:42:05 crc kubenswrapper[4656]: I0128 15:42:05.180012 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" path="/var/lib/kubelet/pods/64a04b9d-2d5a-4095-9292-c7d5de74f369/volumes" Jan 28 15:42:11 crc kubenswrapper[4656]: I0128 15:42:11.281070 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:42:11 crc kubenswrapper[4656]: I0128 15:42:11.281687 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:42:11 crc kubenswrapper[4656]: I0128 15:42:11.281756 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:42:11 crc kubenswrapper[4656]: I0128 15:42:11.282368 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42409758b852b659a7b9211c23462e509a808c14e072325970ea9f330c308f98"} 
pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:42:11 crc kubenswrapper[4656]: I0128 15:42:11.282445 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://42409758b852b659a7b9211c23462e509a808c14e072325970ea9f330c308f98" gracePeriod=600 Jan 28 15:42:12 crc kubenswrapper[4656]: I0128 15:42:12.342334 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="42409758b852b659a7b9211c23462e509a808c14e072325970ea9f330c308f98" exitCode=0 Jan 28 15:42:12 crc kubenswrapper[4656]: I0128 15:42:12.342399 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"42409758b852b659a7b9211c23462e509a808c14e072325970ea9f330c308f98"} Jan 28 15:42:12 crc kubenswrapper[4656]: I0128 15:42:12.342623 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9"} Jan 28 15:42:12 crc kubenswrapper[4656]: I0128 15:42:12.342644 4656 scope.go:117] "RemoveContainer" containerID="e4705c729984fa104745366d57583e3ee80c3a326cc35a32720920b368391441" Jan 28 15:43:32 crc kubenswrapper[4656]: I0128 15:43:32.228608 4656 scope.go:117] "RemoveContainer" containerID="58b49ff7d74020e7af2d978a9b51add46734f221caa85a5d1256bb5216b6bba6" Jan 28 15:43:32 crc kubenswrapper[4656]: I0128 15:43:32.263406 4656 scope.go:117] "RemoveContainer" containerID="3f3fb240badabe1f5e9c9d62c574ea57eeefcfffb2c7d9f13df634359844b112" Jan 28 15:43:32 crc kubenswrapper[4656]: I0128 15:43:32.294141 4656 scope.go:117] "RemoveContainer" containerID="0c99690cd99ff76bd306e4eb9caf6e9e98dd2c44cd01b18972ac6c91a1b608d2" Jan 28 15:43:32 crc kubenswrapper[4656]: I0128 15:43:32.331456 4656 scope.go:117] "RemoveContainer" containerID="88b915705c2ba5912b5cc50a01d396aa718cc235e8432ca2223a91e7cf085d37" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.619432 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b87zg"] Jan 28 15:43:34 crc kubenswrapper[4656]: E0128 15:43:34.620128 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="extract-utilities" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.620155 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="extract-utilities" Jan 28 15:43:34 crc kubenswrapper[4656]: E0128 15:43:34.620204 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="registry-server" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.620211 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="registry-server" Jan 28 15:43:34 crc kubenswrapper[4656]: E0128 15:43:34.620228 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="extract-content" Jan 
28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.620235 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="extract-content" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.620405 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a04b9d-2d5a-4095-9292-c7d5de74f369" containerName="registry-server" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.621675 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.736762 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b87zg"] Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.825235 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-utilities\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.825312 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krdl9\" (UniqueName: \"kubernetes.io/projected/8e9ee302-a975-4344-9d7e-0f1ee6594a55-kube-api-access-krdl9\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.825993 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-catalog-content\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.973185 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-catalog-content\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.973262 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-utilities\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.973310 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krdl9\" (UniqueName: \"kubernetes.io/projected/8e9ee302-a975-4344-9d7e-0f1ee6594a55-kube-api-access-krdl9\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.973906 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-catalog-content\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " 
pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:34 crc kubenswrapper[4656]: I0128 15:43:34.974214 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-utilities\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:35 crc kubenswrapper[4656]: I0128 15:43:35.028201 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krdl9\" (UniqueName: \"kubernetes.io/projected/8e9ee302-a975-4344-9d7e-0f1ee6594a55-kube-api-access-krdl9\") pod \"community-operators-b87zg\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:35 crc kubenswrapper[4656]: I0128 15:43:35.034617 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:35 crc kubenswrapper[4656]: I0128 15:43:35.828703 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b87zg"] Jan 28 15:43:36 crc kubenswrapper[4656]: I0128 15:43:36.152320 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87zg" event={"ID":"8e9ee302-a975-4344-9d7e-0f1ee6594a55","Type":"ContainerStarted","Data":"2703433bec4599503b855edbd1fb46c81b97219681e70741b08613e845a37391"} Jan 28 15:43:37 crc kubenswrapper[4656]: I0128 15:43:37.162036 4656 generic.go:334] "Generic (PLEG): container finished" podID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerID="f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7" exitCode=0 Jan 28 15:43:37 crc kubenswrapper[4656]: I0128 15:43:37.162082 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87zg" event={"ID":"8e9ee302-a975-4344-9d7e-0f1ee6594a55","Type":"ContainerDied","Data":"f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7"} Jan 28 15:43:39 crc kubenswrapper[4656]: I0128 15:43:39.193467 4656 generic.go:334] "Generic (PLEG): container finished" podID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerID="8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013" exitCode=0 Jan 28 15:43:39 crc kubenswrapper[4656]: I0128 15:43:39.193855 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87zg" event={"ID":"8e9ee302-a975-4344-9d7e-0f1ee6594a55","Type":"ContainerDied","Data":"8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013"} Jan 28 15:43:41 crc kubenswrapper[4656]: I0128 15:43:41.218450 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87zg" event={"ID":"8e9ee302-a975-4344-9d7e-0f1ee6594a55","Type":"ContainerStarted","Data":"ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270"} Jan 28 15:43:42 crc kubenswrapper[4656]: I0128 15:43:42.261585 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b87zg" podStartSLOduration=4.604751798 podStartE2EDuration="8.261553463s" podCreationTimestamp="2026-01-28 15:43:34 +0000 UTC" firstStartedPulling="2026-01-28 15:43:37.164638667 +0000 UTC m=+1507.672809461" lastFinishedPulling="2026-01-28 15:43:40.821440322 +0000 UTC m=+1511.329611126" observedRunningTime="2026-01-28 15:43:42.253325168 +0000 UTC m=+1512.761496002" 
watchObservedRunningTime="2026-01-28 15:43:42.261553463 +0000 UTC m=+1512.769724267" Jan 28 15:43:45 crc kubenswrapper[4656]: I0128 15:43:45.035798 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:45 crc kubenswrapper[4656]: I0128 15:43:45.036152 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:45 crc kubenswrapper[4656]: I0128 15:43:45.083869 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.087788 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.140615 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b87zg"] Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.328835 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b87zg" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerName="registry-server" containerID="cri-o://ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270" gracePeriod=2 Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.799801 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.985980 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-utilities\") pod \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.986308 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krdl9\" (UniqueName: \"kubernetes.io/projected/8e9ee302-a975-4344-9d7e-0f1ee6594a55-kube-api-access-krdl9\") pod \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.986371 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-catalog-content\") pod \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\" (UID: \"8e9ee302-a975-4344-9d7e-0f1ee6594a55\") " Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.986871 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-utilities" (OuterVolumeSpecName: "utilities") pod "8e9ee302-a975-4344-9d7e-0f1ee6594a55" (UID: "8e9ee302-a975-4344-9d7e-0f1ee6594a55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:43:55 crc kubenswrapper[4656]: I0128 15:43:55.995832 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9ee302-a975-4344-9d7e-0f1ee6594a55-kube-api-access-krdl9" (OuterVolumeSpecName: "kube-api-access-krdl9") pod "8e9ee302-a975-4344-9d7e-0f1ee6594a55" (UID: "8e9ee302-a975-4344-9d7e-0f1ee6594a55"). InnerVolumeSpecName "kube-api-access-krdl9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.045442 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e9ee302-a975-4344-9d7e-0f1ee6594a55" (UID: "8e9ee302-a975-4344-9d7e-0f1ee6594a55"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.087819 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.087858 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krdl9\" (UniqueName: \"kubernetes.io/projected/8e9ee302-a975-4344-9d7e-0f1ee6594a55-kube-api-access-krdl9\") on node \"crc\" DevicePath \"\"" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.087868 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9ee302-a975-4344-9d7e-0f1ee6594a55-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.340320 4656 generic.go:334] "Generic (PLEG): container finished" podID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerID="ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270" exitCode=0 Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.340418 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b87zg" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.340416 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87zg" event={"ID":"8e9ee302-a975-4344-9d7e-0f1ee6594a55","Type":"ContainerDied","Data":"ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270"} Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.341241 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87zg" event={"ID":"8e9ee302-a975-4344-9d7e-0f1ee6594a55","Type":"ContainerDied","Data":"2703433bec4599503b855edbd1fb46c81b97219681e70741b08613e845a37391"} Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.341291 4656 scope.go:117] "RemoveContainer" containerID="ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.369847 4656 scope.go:117] "RemoveContainer" containerID="8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.406488 4656 scope.go:117] "RemoveContainer" containerID="f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.411315 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b87zg"] Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.420482 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b87zg"] Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.429151 4656 scope.go:117] "RemoveContainer" containerID="ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270" Jan 28 15:43:56 crc kubenswrapper[4656]: E0128 15:43:56.430184 4656 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270\": container with ID starting with ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270 not found: ID does not exist" containerID="ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.430240 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270"} err="failed to get container status \"ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270\": rpc error: code = NotFound desc = could not find container \"ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270\": container with ID starting with ac8d22769ed7811aa18734e3ed0cebe6fbbbdbf708a31c9c70656566e5e92270 not found: ID does not exist" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.430263 4656 scope.go:117] "RemoveContainer" containerID="8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013" Jan 28 15:43:56 crc kubenswrapper[4656]: E0128 15:43:56.430626 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013\": container with ID starting with 8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013 not found: ID does not exist" containerID="8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.430669 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013"} err="failed to get container status \"8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013\": rpc error: code = NotFound desc = could not find container \"8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013\": container with ID starting with 8e52a4a0d7e8924ed5f4e999511dff8c0309b7fa26b64ed87d251a6f6d91f013 not found: ID does not exist" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.430700 4656 scope.go:117] "RemoveContainer" containerID="f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7" Jan 28 15:43:56 crc kubenswrapper[4656]: E0128 15:43:56.431011 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7\": container with ID starting with f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7 not found: ID does not exist" containerID="f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7" Jan 28 15:43:56 crc kubenswrapper[4656]: I0128 15:43:56.431035 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7"} err="failed to get container status \"f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7\": rpc error: code = NotFound desc = could not find container \"f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7\": container with ID starting with f007d886050caa4d39dff8f76cda8d684cd02e3c0391eaa8dbdcf60b9af85fd7 not found: ID does not exist" Jan 28 15:43:57 crc kubenswrapper[4656]: I0128 15:43:57.180105 4656 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" path="/var/lib/kubelet/pods/8e9ee302-a975-4344-9d7e-0f1ee6594a55/volumes" Jan 28 15:44:11 crc kubenswrapper[4656]: I0128 15:44:11.264116 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:44:11 crc kubenswrapper[4656]: I0128 15:44:11.264720 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.356181 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dwqxn"] Jan 28 15:44:23 crc kubenswrapper[4656]: E0128 15:44:23.357066 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerName="extract-utilities" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.357088 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerName="extract-utilities" Jan 28 15:44:23 crc kubenswrapper[4656]: E0128 15:44:23.357115 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerName="registry-server" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.357121 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerName="registry-server" Jan 28 15:44:23 crc kubenswrapper[4656]: E0128 15:44:23.357138 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerName="extract-content" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.357144 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerName="extract-content" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.357325 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9ee302-a975-4344-9d7e-0f1ee6594a55" containerName="registry-server" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.358484 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.377943 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dwqxn"] Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.491686 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm5k7\" (UniqueName: \"kubernetes.io/projected/14c94db8-ad16-4f24-aa39-c2d5248d4961-kube-api-access-wm5k7\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.491750 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-utilities\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.491822 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-catalog-content\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.593462 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm5k7\" (UniqueName: \"kubernetes.io/projected/14c94db8-ad16-4f24-aa39-c2d5248d4961-kube-api-access-wm5k7\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.593530 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-utilities\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.593623 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-catalog-content\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.595247 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-utilities\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.595546 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-catalog-content\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.627774 4656 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wm5k7\" (UniqueName: \"kubernetes.io/projected/14c94db8-ad16-4f24-aa39-c2d5248d4961-kube-api-access-wm5k7\") pod \"redhat-marketplace-dwqxn\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:23 crc kubenswrapper[4656]: I0128 15:44:23.679918 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:24 crc kubenswrapper[4656]: I0128 15:44:24.013159 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dwqxn"] Jan 28 15:44:24 crc kubenswrapper[4656]: I0128 15:44:24.708530 4656 generic.go:334] "Generic (PLEG): container finished" podID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerID="97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070" exitCode=0 Jan 28 15:44:24 crc kubenswrapper[4656]: I0128 15:44:24.708585 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwqxn" event={"ID":"14c94db8-ad16-4f24-aa39-c2d5248d4961","Type":"ContainerDied","Data":"97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070"} Jan 28 15:44:24 crc kubenswrapper[4656]: I0128 15:44:24.708622 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwqxn" event={"ID":"14c94db8-ad16-4f24-aa39-c2d5248d4961","Type":"ContainerStarted","Data":"bfc76d00ee3c2f24f1fa3ce1dd435cc123390dfbf2cea11eefc282f8669895aa"} Jan 28 15:44:26 crc kubenswrapper[4656]: I0128 15:44:26.724579 4656 generic.go:334] "Generic (PLEG): container finished" podID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerID="0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1" exitCode=0 Jan 28 15:44:26 crc kubenswrapper[4656]: I0128 15:44:26.725140 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwqxn" event={"ID":"14c94db8-ad16-4f24-aa39-c2d5248d4961","Type":"ContainerDied","Data":"0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1"} Jan 28 15:44:27 crc kubenswrapper[4656]: I0128 15:44:27.740256 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwqxn" event={"ID":"14c94db8-ad16-4f24-aa39-c2d5248d4961","Type":"ContainerStarted","Data":"3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a"} Jan 28 15:44:27 crc kubenswrapper[4656]: I0128 15:44:27.782586 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dwqxn" podStartSLOduration=2.40037058 podStartE2EDuration="4.782555509s" podCreationTimestamp="2026-01-28 15:44:23 +0000 UTC" firstStartedPulling="2026-01-28 15:44:24.710998225 +0000 UTC m=+1555.219169029" lastFinishedPulling="2026-01-28 15:44:27.093183154 +0000 UTC m=+1557.601353958" observedRunningTime="2026-01-28 15:44:27.771791181 +0000 UTC m=+1558.279961985" watchObservedRunningTime="2026-01-28 15:44:27.782555509 +0000 UTC m=+1558.290726313" Jan 28 15:44:32 crc kubenswrapper[4656]: I0128 15:44:32.412706 4656 scope.go:117] "RemoveContainer" containerID="83b94c30ed93547fd506be79bae97518f5ab107ea9f58e554bb480d642b3aaf6" Jan 28 15:44:32 crc kubenswrapper[4656]: I0128 15:44:32.440306 4656 scope.go:117] "RemoveContainer" containerID="bdbd8b9752b666c133af6189feff3118817c7063c0dd053bb54b7a6f4c3b19d7" Jan 28 15:44:33 crc kubenswrapper[4656]: I0128 15:44:33.681021 4656 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:33 crc kubenswrapper[4656]: I0128 15:44:33.681092 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:33 crc kubenswrapper[4656]: I0128 15:44:33.726319 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:33 crc kubenswrapper[4656]: I0128 15:44:33.843521 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:33 crc kubenswrapper[4656]: I0128 15:44:33.967012 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dwqxn"] Jan 28 15:44:35 crc kubenswrapper[4656]: I0128 15:44:35.816428 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dwqxn" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerName="registry-server" containerID="cri-o://3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a" gracePeriod=2 Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.324063 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.482892 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm5k7\" (UniqueName: \"kubernetes.io/projected/14c94db8-ad16-4f24-aa39-c2d5248d4961-kube-api-access-wm5k7\") pod \"14c94db8-ad16-4f24-aa39-c2d5248d4961\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.483036 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-utilities\") pod \"14c94db8-ad16-4f24-aa39-c2d5248d4961\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.483254 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-catalog-content\") pod \"14c94db8-ad16-4f24-aa39-c2d5248d4961\" (UID: \"14c94db8-ad16-4f24-aa39-c2d5248d4961\") " Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.484749 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-utilities" (OuterVolumeSpecName: "utilities") pod "14c94db8-ad16-4f24-aa39-c2d5248d4961" (UID: "14c94db8-ad16-4f24-aa39-c2d5248d4961"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.489267 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c94db8-ad16-4f24-aa39-c2d5248d4961-kube-api-access-wm5k7" (OuterVolumeSpecName: "kube-api-access-wm5k7") pod "14c94db8-ad16-4f24-aa39-c2d5248d4961" (UID: "14c94db8-ad16-4f24-aa39-c2d5248d4961"). InnerVolumeSpecName "kube-api-access-wm5k7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.516220 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14c94db8-ad16-4f24-aa39-c2d5248d4961" (UID: "14c94db8-ad16-4f24-aa39-c2d5248d4961"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.585274 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.585470 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c94db8-ad16-4f24-aa39-c2d5248d4961-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.585567 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm5k7\" (UniqueName: \"kubernetes.io/projected/14c94db8-ad16-4f24-aa39-c2d5248d4961-kube-api-access-wm5k7\") on node \"crc\" DevicePath \"\"" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.878779 4656 generic.go:334] "Generic (PLEG): container finished" podID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerID="3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a" exitCode=0 Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.879968 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwqxn" event={"ID":"14c94db8-ad16-4f24-aa39-c2d5248d4961","Type":"ContainerDied","Data":"3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a"} Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.880080 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dwqxn" event={"ID":"14c94db8-ad16-4f24-aa39-c2d5248d4961","Type":"ContainerDied","Data":"bfc76d00ee3c2f24f1fa3ce1dd435cc123390dfbf2cea11eefc282f8669895aa"} Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.880208 4656 scope.go:117] "RemoveContainer" containerID="3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.880485 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dwqxn" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.926352 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dwqxn"] Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.931232 4656 scope.go:117] "RemoveContainer" containerID="0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.935943 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dwqxn"] Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.954365 4656 scope.go:117] "RemoveContainer" containerID="97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.991571 4656 scope.go:117] "RemoveContainer" containerID="3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a" Jan 28 15:44:36 crc kubenswrapper[4656]: E0128 15:44:36.992136 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a\": container with ID starting with 3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a not found: ID does not exist" containerID="3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.992349 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a"} err="failed to get container status \"3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a\": rpc error: code = NotFound desc = could not find container \"3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a\": container with ID starting with 3be6e23ba977c372e297ec59aabb72f7bf32c7fc3a6fe5cb3417369a352f695a not found: ID does not exist" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.992395 4656 scope.go:117] "RemoveContainer" containerID="0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1" Jan 28 15:44:36 crc kubenswrapper[4656]: E0128 15:44:36.992679 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1\": container with ID starting with 0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1 not found: ID does not exist" containerID="0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.992718 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1"} err="failed to get container status \"0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1\": rpc error: code = NotFound desc = could not find container \"0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1\": container with ID starting with 0bc4e6628da16624bedbcdc5d9637815c6e2aed6dd3a3ab5a3bbae49d7c911d1 not found: ID does not exist" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.992749 4656 scope.go:117] "RemoveContainer" containerID="97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070" Jan 28 15:44:36 crc kubenswrapper[4656]: E0128 15:44:36.993014 4656 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070\": container with ID starting with 97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070 not found: ID does not exist" containerID="97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070" Jan 28 15:44:36 crc kubenswrapper[4656]: I0128 15:44:36.993039 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070"} err="failed to get container status \"97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070\": rpc error: code = NotFound desc = could not find container \"97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070\": container with ID starting with 97e25f7b00571e1cbdb65ebcbea84c4fe7ceda793eaa73b8e53ab6d7b4bf3070 not found: ID does not exist" Jan 28 15:44:37 crc kubenswrapper[4656]: I0128 15:44:37.200448 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" path="/var/lib/kubelet/pods/14c94db8-ad16-4f24-aa39-c2d5248d4961/volumes" Jan 28 15:44:41 crc kubenswrapper[4656]: I0128 15:44:41.264336 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:44:41 crc kubenswrapper[4656]: I0128 15:44:41.264906 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.174132 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"] Jan 28 15:45:00 crc kubenswrapper[4656]: E0128 15:45:00.175137 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerName="extract-utilities" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.175173 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerName="extract-utilities" Jan 28 15:45:00 crc kubenswrapper[4656]: E0128 15:45:00.175210 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerName="extract-content" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.175217 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerName="extract-content" Jan 28 15:45:00 crc kubenswrapper[4656]: E0128 15:45:00.175232 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerName="registry-server" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.175239 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerName="registry-server" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.175464 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c94db8-ad16-4f24-aa39-c2d5248d4961" containerName="registry-server" Jan 28 15:45:00 crc 
kubenswrapper[4656]: I0128 15:45:00.176996 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.179719 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.180023 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.193420 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"] Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.363063 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsqg5\" (UniqueName: \"kubernetes.io/projected/bd321e99-8c7b-4ecc-8810-69978ab0d329-kube-api-access-bsqg5\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.363413 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd321e99-8c7b-4ecc-8810-69978ab0d329-secret-volume\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.363610 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd321e99-8c7b-4ecc-8810-69978ab0d329-config-volume\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.464676 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd321e99-8c7b-4ecc-8810-69978ab0d329-config-volume\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.464792 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsqg5\" (UniqueName: \"kubernetes.io/projected/bd321e99-8c7b-4ecc-8810-69978ab0d329-kube-api-access-bsqg5\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.464851 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd321e99-8c7b-4ecc-8810-69978ab0d329-secret-volume\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.465769 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/bd321e99-8c7b-4ecc-8810-69978ab0d329-config-volume\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.475360 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd321e99-8c7b-4ecc-8810-69978ab0d329-secret-volume\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.487494 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsqg5\" (UniqueName: \"kubernetes.io/projected/bd321e99-8c7b-4ecc-8810-69978ab0d329-kube-api-access-bsqg5\") pod \"collect-profiles-29493585-qhh7r\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.498385 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" Jan 28 15:45:00 crc kubenswrapper[4656]: I0128 15:45:00.948620 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r"] Jan 28 15:45:01 crc kubenswrapper[4656]: I0128 15:45:01.098345 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" event={"ID":"bd321e99-8c7b-4ecc-8810-69978ab0d329","Type":"ContainerStarted","Data":"3d5a796e4b4b584a6d6e61b520c24a54b5dcfc28565e15bbf3f46659f5dcdd0a"} Jan 28 15:45:02 crc kubenswrapper[4656]: I0128 15:45:02.106793 4656 generic.go:334] "Generic (PLEG): container finished" podID="bd321e99-8c7b-4ecc-8810-69978ab0d329" containerID="5f9b0baaeb255e44970614227fb5114a7cc5769a0081687591976fc80050f1ac" exitCode=0 Jan 28 15:45:02 crc kubenswrapper[4656]: I0128 15:45:02.107113 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" event={"ID":"bd321e99-8c7b-4ecc-8810-69978ab0d329","Type":"ContainerDied","Data":"5f9b0baaeb255e44970614227fb5114a7cc5769a0081687591976fc80050f1ac"} Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.409535 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.512272 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd321e99-8c7b-4ecc-8810-69978ab0d329-config-volume\") pod \"bd321e99-8c7b-4ecc-8810-69978ab0d329\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.512389 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsqg5\" (UniqueName: \"kubernetes.io/projected/bd321e99-8c7b-4ecc-8810-69978ab0d329-kube-api-access-bsqg5\") pod \"bd321e99-8c7b-4ecc-8810-69978ab0d329\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.512429 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd321e99-8c7b-4ecc-8810-69978ab0d329-secret-volume\") pod \"bd321e99-8c7b-4ecc-8810-69978ab0d329\" (UID: \"bd321e99-8c7b-4ecc-8810-69978ab0d329\") " Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.513316 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd321e99-8c7b-4ecc-8810-69978ab0d329-config-volume" (OuterVolumeSpecName: "config-volume") pod "bd321e99-8c7b-4ecc-8810-69978ab0d329" (UID: "bd321e99-8c7b-4ecc-8810-69978ab0d329"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.522543 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd321e99-8c7b-4ecc-8810-69978ab0d329-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bd321e99-8c7b-4ecc-8810-69978ab0d329" (UID: "bd321e99-8c7b-4ecc-8810-69978ab0d329"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.523585 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd321e99-8c7b-4ecc-8810-69978ab0d329-kube-api-access-bsqg5" (OuterVolumeSpecName: "kube-api-access-bsqg5") pod "bd321e99-8c7b-4ecc-8810-69978ab0d329" (UID: "bd321e99-8c7b-4ecc-8810-69978ab0d329"). InnerVolumeSpecName "kube-api-access-bsqg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.614083 4656 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd321e99-8c7b-4ecc-8810-69978ab0d329-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.614111 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsqg5\" (UniqueName: \"kubernetes.io/projected/bd321e99-8c7b-4ecc-8810-69978ab0d329-kube-api-access-bsqg5\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:03 crc kubenswrapper[4656]: I0128 15:45:03.614124 4656 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd321e99-8c7b-4ecc-8810-69978ab0d329-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:04 crc kubenswrapper[4656]: I0128 15:45:04.123837 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" event={"ID":"bd321e99-8c7b-4ecc-8810-69978ab0d329","Type":"ContainerDied","Data":"3d5a796e4b4b584a6d6e61b520c24a54b5dcfc28565e15bbf3f46659f5dcdd0a"} Jan 28 15:45:04 crc kubenswrapper[4656]: I0128 15:45:04.123875 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d5a796e4b4b584a6d6e61b520c24a54b5dcfc28565e15bbf3f46659f5dcdd0a" Jan 28 15:45:04 crc kubenswrapper[4656]: I0128 15:45:04.123916 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-qhh7r" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.263704 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.265044 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.265177 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.265906 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.266040 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" gracePeriod=600 Jan 28 15:45:11 crc kubenswrapper[4656]: E0128 15:45:11.401980 4656 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.490539 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8k6pt"] Jan 28 15:45:11 crc kubenswrapper[4656]: E0128 15:45:11.490907 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd321e99-8c7b-4ecc-8810-69978ab0d329" containerName="collect-profiles" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.490920 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd321e99-8c7b-4ecc-8810-69978ab0d329" containerName="collect-profiles" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.491093 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd321e99-8c7b-4ecc-8810-69978ab0d329" containerName="collect-profiles" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.492215 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8k6pt" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.510733 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8k6pt"] Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.642937 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nszss\" (UniqueName: \"kubernetes.io/projected/f9815183-b48b-4107-a4d5-91d208bc8850-kube-api-access-nszss\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.643664 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-utilities\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.643842 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-catalog-content\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.745123 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nszss\" (UniqueName: \"kubernetes.io/projected/f9815183-b48b-4107-a4d5-91d208bc8850-kube-api-access-nszss\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.745188 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-utilities\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt" 
Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.745242 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-catalog-content\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.745804 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-catalog-content\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.745990 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-utilities\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.772904 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nszss\" (UniqueName: \"kubernetes.io/projected/f9815183-b48b-4107-a4d5-91d208bc8850-kube-api-access-nszss\") pod \"certified-operators-8k6pt\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") " pod="openshift-marketplace/certified-operators-8k6pt" Jan 28 15:45:11 crc kubenswrapper[4656]: I0128 15:45:11.810814 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8k6pt" Jan 28 15:45:12 crc kubenswrapper[4656]: I0128 15:45:12.191598 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" exitCode=0 Jan 28 15:45:12 crc kubenswrapper[4656]: I0128 15:45:12.191994 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9"} Jan 28 15:45:12 crc kubenswrapper[4656]: I0128 15:45:12.192078 4656 scope.go:117] "RemoveContainer" containerID="42409758b852b659a7b9211c23462e509a808c14e072325970ea9f330c308f98" Jan 28 15:45:12 crc kubenswrapper[4656]: I0128 15:45:12.192964 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:45:12 crc kubenswrapper[4656]: E0128 15:45:12.193659 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:45:12 crc kubenswrapper[4656]: I0128 15:45:12.390440 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8k6pt"] Jan 28 15:45:13 crc kubenswrapper[4656]: I0128 15:45:13.204302 4656 generic.go:334] "Generic (PLEG): container finished" podID="f9815183-b48b-4107-a4d5-91d208bc8850" 
containerID="229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646" exitCode=0 Jan 28 15:45:13 crc kubenswrapper[4656]: I0128 15:45:13.204356 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8k6pt" event={"ID":"f9815183-b48b-4107-a4d5-91d208bc8850","Type":"ContainerDied","Data":"229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646"} Jan 28 15:45:13 crc kubenswrapper[4656]: I0128 15:45:13.204383 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8k6pt" event={"ID":"f9815183-b48b-4107-a4d5-91d208bc8850","Type":"ContainerStarted","Data":"bee71b8d16eeb832a63b5c0c256bdb5f880a13bf3f1a0762732f27116d5923dd"} Jan 28 15:45:14 crc kubenswrapper[4656]: I0128 15:45:14.215502 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8k6pt" event={"ID":"f9815183-b48b-4107-a4d5-91d208bc8850","Type":"ContainerStarted","Data":"fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98"} Jan 28 15:45:15 crc kubenswrapper[4656]: I0128 15:45:15.224607 4656 generic.go:334] "Generic (PLEG): container finished" podID="f9815183-b48b-4107-a4d5-91d208bc8850" containerID="fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98" exitCode=0 Jan 28 15:45:15 crc kubenswrapper[4656]: I0128 15:45:15.224676 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8k6pt" event={"ID":"f9815183-b48b-4107-a4d5-91d208bc8850","Type":"ContainerDied","Data":"fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98"} Jan 28 15:45:16 crc kubenswrapper[4656]: I0128 15:45:16.237824 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8k6pt" event={"ID":"f9815183-b48b-4107-a4d5-91d208bc8850","Type":"ContainerStarted","Data":"e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0"} Jan 28 15:45:16 crc kubenswrapper[4656]: I0128 15:45:16.263591 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8k6pt" podStartSLOduration=2.80355046 podStartE2EDuration="5.263569881s" podCreationTimestamp="2026-01-28 15:45:11 +0000 UTC" firstStartedPulling="2026-01-28 15:45:13.206068229 +0000 UTC m=+1603.714239043" lastFinishedPulling="2026-01-28 15:45:15.66608767 +0000 UTC m=+1606.174258464" observedRunningTime="2026-01-28 15:45:16.259524715 +0000 UTC m=+1606.767695519" watchObservedRunningTime="2026-01-28 15:45:16.263569881 +0000 UTC m=+1606.771740685" Jan 28 15:45:21 crc kubenswrapper[4656]: I0128 15:45:21.811271 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8k6pt" Jan 28 15:45:21 crc kubenswrapper[4656]: I0128 15:45:21.811868 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8k6pt" Jan 28 15:45:21 crc kubenswrapper[4656]: I0128 15:45:21.861287 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8k6pt" Jan 28 15:45:22 crc kubenswrapper[4656]: I0128 15:45:22.393682 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8k6pt" Jan 28 15:45:22 crc kubenswrapper[4656]: I0128 15:45:22.445865 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8k6pt"] Jan 28 15:45:24 crc 
Jan 28 15:45:24 crc kubenswrapper[4656]: I0128 15:45:24.178381 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9"
Jan 28 15:45:24 crc kubenswrapper[4656]: E0128 15:45:24.179018 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:45:24 crc kubenswrapper[4656]: I0128 15:45:24.364259 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8k6pt" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" containerName="registry-server" containerID="cri-o://e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0" gracePeriod=2
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.024217 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8k6pt"
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.208132 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-catalog-content\") pod \"f9815183-b48b-4107-a4d5-91d208bc8850\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") "
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.209301 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-utilities\") pod \"f9815183-b48b-4107-a4d5-91d208bc8850\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") "
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.209443 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nszss\" (UniqueName: \"kubernetes.io/projected/f9815183-b48b-4107-a4d5-91d208bc8850-kube-api-access-nszss\") pod \"f9815183-b48b-4107-a4d5-91d208bc8850\" (UID: \"f9815183-b48b-4107-a4d5-91d208bc8850\") "
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.210130 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-utilities" (OuterVolumeSpecName: "utilities") pod "f9815183-b48b-4107-a4d5-91d208bc8850" (UID: "f9815183-b48b-4107-a4d5-91d208bc8850"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.217521 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9815183-b48b-4107-a4d5-91d208bc8850-kube-api-access-nszss" (OuterVolumeSpecName: "kube-api-access-nszss") pod "f9815183-b48b-4107-a4d5-91d208bc8850" (UID: "f9815183-b48b-4107-a4d5-91d208bc8850"). InnerVolumeSpecName "kube-api-access-nszss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.289345 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9815183-b48b-4107-a4d5-91d208bc8850" (UID: "f9815183-b48b-4107-a4d5-91d208bc8850"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.312676 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.312731 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9815183-b48b-4107-a4d5-91d208bc8850-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.312744 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nszss\" (UniqueName: \"kubernetes.io/projected/f9815183-b48b-4107-a4d5-91d208bc8850-kube-api-access-nszss\") on node \"crc\" DevicePath \"\""
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.373065 4656 generic.go:334] "Generic (PLEG): container finished" podID="f9815183-b48b-4107-a4d5-91d208bc8850" containerID="e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0" exitCode=0
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.373124 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8k6pt" event={"ID":"f9815183-b48b-4107-a4d5-91d208bc8850","Type":"ContainerDied","Data":"e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0"}
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.373196 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8k6pt" event={"ID":"f9815183-b48b-4107-a4d5-91d208bc8850","Type":"ContainerDied","Data":"bee71b8d16eeb832a63b5c0c256bdb5f880a13bf3f1a0762732f27116d5923dd"}
Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.373225 4656 scope.go:117] "RemoveContainer" containerID="e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0"
Need to start a new one" pod="openshift-marketplace/certified-operators-8k6pt" Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.411397 4656 scope.go:117] "RemoveContainer" containerID="fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98" Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.423263 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8k6pt"] Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.431797 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8k6pt"] Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.436757 4656 scope.go:117] "RemoveContainer" containerID="229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646" Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.467835 4656 scope.go:117] "RemoveContainer" containerID="e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0" Jan 28 15:45:25 crc kubenswrapper[4656]: E0128 15:45:25.468510 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0\": container with ID starting with e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0 not found: ID does not exist" containerID="e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0" Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.468584 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0"} err="failed to get container status \"e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0\": rpc error: code = NotFound desc = could not find container \"e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0\": container with ID starting with e599f1aa1c331e4cf492cbd7f6f9ac1e7586ff52ded306db79ba02af5e22cda0 not found: ID does not exist" Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.468631 4656 scope.go:117] "RemoveContainer" containerID="fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98" Jan 28 15:45:25 crc kubenswrapper[4656]: E0128 15:45:25.469145 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98\": container with ID starting with fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98 not found: ID does not exist" containerID="fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98" Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.469216 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98"} err="failed to get container status \"fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98\": rpc error: code = NotFound desc = could not find container \"fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98\": container with ID starting with fd91a21103dd2cec20f48f4fbd5b1845e2a46003e97eb26c31b99430dccebe98 not found: ID does not exist" Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.469241 4656 scope.go:117] "RemoveContainer" containerID="229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646" Jan 28 15:45:25 crc kubenswrapper[4656]: E0128 15:45:25.469578 4656 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646\": container with ID starting with 229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646 not found: ID does not exist" containerID="229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646" Jan 28 15:45:25 crc kubenswrapper[4656]: I0128 15:45:25.469609 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646"} err="failed to get container status \"229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646\": rpc error: code = NotFound desc = could not find container \"229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646\": container with ID starting with 229a7a33181f7ab8b71a98c94abb74e4f3631bb891ad59ac7c612e7b17927646 not found: ID does not exist" Jan 28 15:45:27 crc kubenswrapper[4656]: I0128 15:45:27.180862 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" path="/var/lib/kubelet/pods/f9815183-b48b-4107-a4d5-91d208bc8850/volumes" Jan 28 15:45:39 crc kubenswrapper[4656]: I0128 15:45:39.171098 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:45:39 crc kubenswrapper[4656]: E0128 15:45:39.171978 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:45:52 crc kubenswrapper[4656]: I0128 15:45:52.171411 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:45:52 crc kubenswrapper[4656]: E0128 15:45:52.172244 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:46:06 crc kubenswrapper[4656]: I0128 15:46:06.170909 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:46:06 crc kubenswrapper[4656]: E0128 15:46:06.171762 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:46:19 crc kubenswrapper[4656]: I0128 15:46:19.171008 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:46:19 crc kubenswrapper[4656]: E0128 15:46:19.172060 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:46:30 crc kubenswrapper[4656]: I0128 15:46:30.171684 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:46:30 crc kubenswrapper[4656]: E0128 15:46:30.172635 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:46:41 crc kubenswrapper[4656]: I0128 15:46:41.176646 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:46:41 crc kubenswrapper[4656]: E0128 15:46:41.177557 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:46:55 crc kubenswrapper[4656]: I0128 15:46:55.170617 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:46:55 crc kubenswrapper[4656]: E0128 15:46:55.171562 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:47:07 crc kubenswrapper[4656]: I0128 15:47:07.170671 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:47:07 crc kubenswrapper[4656]: E0128 15:47:07.171473 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:47:20 crc kubenswrapper[4656]: I0128 15:47:20.171495 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:47:20 crc kubenswrapper[4656]: E0128 15:47:20.172329 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:47:34 crc kubenswrapper[4656]: I0128 15:47:34.171858 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:47:34 crc kubenswrapper[4656]: E0128 15:47:34.174510 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:47:46 crc kubenswrapper[4656]: I0128 15:47:46.171877 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:47:46 crc kubenswrapper[4656]: E0128 15:47:46.172756 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.070915 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2q82c"] Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.083856 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-b64sc"] Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.095562 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2q82c"] Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.102500 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-91ca-account-create-update-lzb9w"] Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.109334 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-b64sc"] Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.115479 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-91ca-account-create-update-lzb9w"] Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.181526 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32493d3d-ca02-451a-b1b0-51d4f82d54f3" path="/var/lib/kubelet/pods/32493d3d-ca02-451a-b1b0-51d4f82d54f3/volumes" Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.183008 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e7d30a-cf9c-4aa5-880f-e214b8694082" path="/var/lib/kubelet/pods/69e7d30a-cf9c-4aa5-880f-e214b8694082/volumes" Jan 28 15:47:49 crc kubenswrapper[4656]: I0128 15:47:49.183961 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4037dd9-6fe0-4f3a-9fca-4e716126a317" path="/var/lib/kubelet/pods/e4037dd9-6fe0-4f3a-9fca-4e716126a317/volumes" Jan 28 15:47:50 crc kubenswrapper[4656]: I0128 15:47:50.042575 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3d7e-account-create-update-6v56m"] Jan 28 15:47:50 crc kubenswrapper[4656]: I0128 15:47:50.050002 4656 
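The entries above settle into a steady rhythm: every 10-15 seconds the pod worker wakes up, wants to restart machine-config-daemon, and is refused with "back-off 5m0s". Kubelet's restart back-off roughly doubles per crash from a 10s base and is capped (5m by default), which is why the message pins at 5m0s. A sketch of that schedule under those assumptions (the real implementation is kubelet's flowcontrol.Backoff, which also decays the delay after quiet periods):

```go
package main

import (
	"fmt"
	"time"
)

// Approximate CrashLoopBackOff schedule: 10s doubling per restart,
// capped at 5m. Illustrative only.
func backoff(restarts int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %v\n", r, backoff(r)) // 10s, 20s, ..., 5m0s
	}
}
```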
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-gnzml"] Jan 28 15:47:50 crc kubenswrapper[4656]: I0128 15:47:50.056544 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-93fe-account-create-update-r2snq"] Jan 28 15:47:50 crc kubenswrapper[4656]: I0128 15:47:50.064450 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3d7e-account-create-update-6v56m"] Jan 28 15:47:50 crc kubenswrapper[4656]: I0128 15:47:50.070871 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-gnzml"] Jan 28 15:47:50 crc kubenswrapper[4656]: I0128 15:47:50.076764 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-93fe-account-create-update-r2snq"] Jan 28 15:47:51 crc kubenswrapper[4656]: I0128 15:47:51.180119 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="099dbe89-7289-453c-84a2-f0de86b792cf" path="/var/lib/kubelet/pods/099dbe89-7289-453c-84a2-f0de86b792cf/volumes" Jan 28 15:47:51 crc kubenswrapper[4656]: I0128 15:47:51.181177 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4a6af64-b874-4449-ae51-8902df8e9bdf" path="/var/lib/kubelet/pods/c4a6af64-b874-4449-ae51-8902df8e9bdf/volumes" Jan 28 15:47:51 crc kubenswrapper[4656]: I0128 15:47:51.181751 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfcf1378-d339-426c-bd01-36cd47172c37" path="/var/lib/kubelet/pods/dfcf1378-d339-426c-bd01-36cd47172c37/volumes" Jan 28 15:47:57 crc kubenswrapper[4656]: I0128 15:47:57.029523 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-slw4v"] Jan 28 15:47:57 crc kubenswrapper[4656]: I0128 15:47:57.035967 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-slw4v"] Jan 28 15:47:57 crc kubenswrapper[4656]: I0128 15:47:57.171374 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:47:57 crc kubenswrapper[4656]: E0128 15:47:57.171619 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:47:57 crc kubenswrapper[4656]: I0128 15:47:57.180032 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1df812f9-220d-45f7-aa2a-c26196ef62e5" path="/var/lib/kubelet/pods/1df812f9-220d-45f7-aa2a-c26196ef62e5/volumes" Jan 28 15:48:10 crc kubenswrapper[4656]: I0128 15:48:10.171330 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:48:10 crc kubenswrapper[4656]: E0128 15:48:10.172181 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:48:19 crc kubenswrapper[4656]: I0128 15:48:19.054952 4656 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4dvx6"] Jan 28 15:48:19 crc kubenswrapper[4656]: I0128 15:48:19.060892 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4dvx6"] Jan 28 15:48:19 crc kubenswrapper[4656]: I0128 15:48:19.181095 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aed3d60-8ff4-4b82-9bf8-7892dff01cff" path="/var/lib/kubelet/pods/9aed3d60-8ff4-4b82-9bf8-7892dff01cff/volumes" Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.034885 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3131-account-create-update-6kp5h"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.042493 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-vdvd7"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.049751 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2c7a-account-create-update-s5pbw"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.056736 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a9b7-account-create-update-sf97j"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.065487 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3131-account-create-update-6kp5h"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.075955 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-vdvd7"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.082604 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a9b7-account-create-update-sf97j"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.088689 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2c7a-account-create-update-s5pbw"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.094330 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-cc7jv"] Jan 28 15:48:20 crc kubenswrapper[4656]: I0128 15:48:20.099659 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-cc7jv"] Jan 28 15:48:21 crc kubenswrapper[4656]: I0128 15:48:21.174283 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:48:21 crc kubenswrapper[4656]: E0128 15:48:21.174824 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:48:21 crc kubenswrapper[4656]: I0128 15:48:21.187702 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dc7809e-fb0b-4e26-ad3b-45aceb483265" path="/var/lib/kubelet/pods/1dc7809e-fb0b-4e26-ad3b-45aceb483265/volumes" Jan 28 15:48:21 crc kubenswrapper[4656]: I0128 15:48:21.189339 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2455ddf8-bd67-4fe1-821e-0feda40d7da9" path="/var/lib/kubelet/pods/2455ddf8-bd67-4fe1-821e-0feda40d7da9/volumes" Jan 28 15:48:21 crc kubenswrapper[4656]: I0128 15:48:21.190509 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccf851bf-a272-4c0a-99a1-97c464d23a0d" 
path="/var/lib/kubelet/pods/ccf851bf-a272-4c0a-99a1-97c464d23a0d/volumes" Jan 28 15:48:21 crc kubenswrapper[4656]: I0128 15:48:21.191588 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf48bf2a-2b8d-41ac-a712-9218091e8352" path="/var/lib/kubelet/pods/cf48bf2a-2b8d-41ac-a712-9218091e8352/volumes" Jan 28 15:48:21 crc kubenswrapper[4656]: I0128 15:48:21.193601 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d3108c-ea49-46b7-896a-6303b5651abc" path="/var/lib/kubelet/pods/e4d3108c-ea49-46b7-896a-6303b5651abc/volumes" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.622561 4656 scope.go:117] "RemoveContainer" containerID="00af603897747403f2999c8dd0ea82db99373770b631a4b1c83fa47765ccde4d" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.646946 4656 scope.go:117] "RemoveContainer" containerID="2b87d65405f263ac45d36349abb81b3c7c7ec205bf002789539910deb242d968" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.686565 4656 scope.go:117] "RemoveContainer" containerID="3dde6ae74f513f5fb2d842f4ed02820c2b2f634f4c0cbc22132cce071c91ef58" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.720056 4656 scope.go:117] "RemoveContainer" containerID="ae2e87be8e9439b8c799f033078d2b5429004fda019d053d0cb63bfa319ab598" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.758657 4656 scope.go:117] "RemoveContainer" containerID="266a34fa5d7bf2b9b096e72f7c9b6721bd869c8c77436d0e440426fb23a20543" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.785494 4656 scope.go:117] "RemoveContainer" containerID="8bb388635794bed8d36ed9b59e0ebe5aa23722a91994b61ca6e7ca5038083d1d" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.826449 4656 scope.go:117] "RemoveContainer" containerID="54efb2e0c946cd8bb2a3f18919e54300b18699ccd32fa90c9ddedf9982d9a734" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.847690 4656 scope.go:117] "RemoveContainer" containerID="d0d1b804843eff8d434244a805f92f46f9f0772ead6e44bf1c0d417856331fa2" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.872679 4656 scope.go:117] "RemoveContainer" containerID="02f441349595b2fea556f764337befa680c75f21f42833da737a52fe0e77200f" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.890933 4656 scope.go:117] "RemoveContainer" containerID="2365be7a0a671764daf45f822757fffcf6c88cb4fb34815a2047bee431512215" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.911860 4656 scope.go:117] "RemoveContainer" containerID="810dbe4d5afc6a6d7cc6184ae641765eef2d6efff2d1a416b9a80f9cc06da73c" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.930530 4656 scope.go:117] "RemoveContainer" containerID="5f4d77dc94bab7589c74c06da6f776ffa81bcf97ec1f8a2199e44a457c390fb6" Jan 28 15:48:32 crc kubenswrapper[4656]: I0128 15:48:32.956155 4656 scope.go:117] "RemoveContainer" containerID="6fe56555d0ff628c998638556e5ceea7a70562af71008d6c468acec1f6303046" Jan 28 15:48:34 crc kubenswrapper[4656]: I0128 15:48:34.034655 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-842wg"] Jan 28 15:48:34 crc kubenswrapper[4656]: I0128 15:48:34.047957 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-842wg"] Jan 28 15:48:35 crc kubenswrapper[4656]: I0128 15:48:35.179271 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72cfa9c1-01ab-4c7e-80fa-f99e63b2602c" path="/var/lib/kubelet/pods/72cfa9c1-01ab-4c7e-80fa-f99e63b2602c/volumes" Jan 28 15:48:36 crc kubenswrapper[4656]: I0128 15:48:36.170370 4656 scope.go:117] 
"RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:48:36 crc kubenswrapper[4656]: E0128 15:48:36.170792 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:48:47 crc kubenswrapper[4656]: I0128 15:48:47.170480 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:48:47 crc kubenswrapper[4656]: E0128 15:48:47.171210 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:49:01 crc kubenswrapper[4656]: I0128 15:49:01.174964 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:49:01 crc kubenswrapper[4656]: E0128 15:49:01.175472 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:49:14 crc kubenswrapper[4656]: I0128 15:49:14.172113 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:49:14 crc kubenswrapper[4656]: E0128 15:49:14.173028 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:49:26 crc kubenswrapper[4656]: I0128 15:49:26.171426 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:49:26 crc kubenswrapper[4656]: E0128 15:49:26.172039 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:49:33 crc kubenswrapper[4656]: I0128 15:49:33.152376 4656 scope.go:117] "RemoveContainer" containerID="b5f01d437841162f7d723c8a3461fbed8e9508da364b69783ba5047744e009bb" Jan 28 15:49:37 crc kubenswrapper[4656]: I0128 15:49:37.171311 4656 scope.go:117] 
"RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:49:37 crc kubenswrapper[4656]: E0128 15:49:37.173036 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:49:50 crc kubenswrapper[4656]: I0128 15:49:50.170827 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:49:50 crc kubenswrapper[4656]: E0128 15:49:50.171486 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:50:04 crc kubenswrapper[4656]: I0128 15:50:04.171438 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:50:04 crc kubenswrapper[4656]: E0128 15:50:04.173945 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:50:19 crc kubenswrapper[4656]: I0128 15:50:19.171766 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9" Jan 28 15:50:19 crc kubenswrapper[4656]: I0128 15:50:19.982584 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"bb32448722780b8d9f530e9024e281722b1f8d106f54d17c562c9b55376475e0"} Jan 28 15:52:41 crc kubenswrapper[4656]: I0128 15:52:41.264202 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:52:41 crc kubenswrapper[4656]: I0128 15:52:41.264911 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:53:11 crc kubenswrapper[4656]: I0128 15:53:11.263812 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Jan 28 15:53:11 crc kubenswrapper[4656]: I0128 15:53:11.264493 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.263985 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.264680 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.264761 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk"
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.265522 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb32448722780b8d9f530e9024e281722b1f8d106f54d17c562c9b55376475e0"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.265597 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://bb32448722780b8d9f530e9024e281722b1f8d106f54d17c562c9b55376475e0" gracePeriod=600
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.774647 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="bb32448722780b8d9f530e9024e281722b1f8d106f54d17c562c9b55376475e0" exitCode=0
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.774696 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"bb32448722780b8d9f530e9024e281722b1f8d106f54d17c562c9b55376475e0"}
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.775005 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"}
Jan 28 15:53:41 crc kubenswrapper[4656]: I0128 15:53:41.775054 4656 scope.go:117] "RemoveContainer" containerID="88adb30c3b91561ad1e9311ab2fa663c8d0e3b65a35997f7b52c2ecfaeef7bb9"
Jan 28 15:53:53 crc kubenswrapper[4656]: I0128 15:53:53.936253 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-twk2d"]
Jan 28 15:53:53 crc kubenswrapper[4656]: E0128 15:53:53.937300 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" containerName="extract-utilities"
Jan 28 15:53:53 crc kubenswrapper[4656]: I0128 15:53:53.937326 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" containerName="extract-utilities"
Jan 28 15:53:53 crc kubenswrapper[4656]: E0128 15:53:53.937348 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" containerName="registry-server"
Jan 28 15:53:53 crc kubenswrapper[4656]: I0128 15:53:53.937357 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" containerName="registry-server"
Jan 28 15:53:53 crc kubenswrapper[4656]: E0128 15:53:53.937384 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" containerName="extract-content"
Jan 28 15:53:53 crc kubenswrapper[4656]: I0128 15:53:53.937393 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" containerName="extract-content"
Jan 28 15:53:53 crc kubenswrapper[4656]: I0128 15:53:53.937618 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9815183-b48b-4107-a4d5-91d208bc8850" containerName="registry-server"
Jan 28 15:53:53 crc kubenswrapper[4656]: I0128 15:53:53.939130 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:53:53 crc kubenswrapper[4656]: I0128 15:53:53.951766 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-twk2d"]
Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.006157 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-catalog-content\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.006243 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-utilities\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.006358 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sjc5\" (UniqueName: \"kubernetes.io/projected/e57c2aaa-e624-4554-a216-411263758595-kube-api-access-7sjc5\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d"
Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.108101 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sjc5\" (UniqueName: \"kubernetes.io/projected/e57c2aaa-e624-4554-a216-411263758595-kube-api-access-7sjc5\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d"
\"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-catalog-content\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d" Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.108244 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-utilities\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d" Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.108850 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-utilities\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d" Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.108847 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-catalog-content\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d" Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.129785 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sjc5\" (UniqueName: \"kubernetes.io/projected/e57c2aaa-e624-4554-a216-411263758595-kube-api-access-7sjc5\") pod \"community-operators-twk2d\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " pod="openshift-marketplace/community-operators-twk2d" Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.263508 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-twk2d" Jan 28 15:53:54 crc kubenswrapper[4656]: I0128 15:53:54.896492 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-twk2d"] Jan 28 15:53:55 crc kubenswrapper[4656]: I0128 15:53:55.885648 4656 generic.go:334] "Generic (PLEG): container finished" podID="e57c2aaa-e624-4554-a216-411263758595" containerID="90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878" exitCode=0 Jan 28 15:53:55 crc kubenswrapper[4656]: I0128 15:53:55.886048 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twk2d" event={"ID":"e57c2aaa-e624-4554-a216-411263758595","Type":"ContainerDied","Data":"90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878"} Jan 28 15:53:55 crc kubenswrapper[4656]: I0128 15:53:55.886102 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twk2d" event={"ID":"e57c2aaa-e624-4554-a216-411263758595","Type":"ContainerStarted","Data":"6264cbc4dfb3251a994990b3e8fa424077d3c85f1ecb113417ca9e2569d22b7c"} Jan 28 15:53:55 crc kubenswrapper[4656]: I0128 15:53:55.890979 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:53:56 crc kubenswrapper[4656]: I0128 15:53:56.895967 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twk2d" event={"ID":"e57c2aaa-e624-4554-a216-411263758595","Type":"ContainerStarted","Data":"8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d"} Jan 28 15:53:57 crc kubenswrapper[4656]: I0128 15:53:57.905080 4656 generic.go:334] "Generic (PLEG): container finished" podID="e57c2aaa-e624-4554-a216-411263758595" containerID="8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d" exitCode=0 Jan 28 15:53:57 crc kubenswrapper[4656]: I0128 15:53:57.905454 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twk2d" event={"ID":"e57c2aaa-e624-4554-a216-411263758595","Type":"ContainerDied","Data":"8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d"} Jan 28 15:53:57 crc kubenswrapper[4656]: I0128 15:53:57.905486 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twk2d" event={"ID":"e57c2aaa-e624-4554-a216-411263758595","Type":"ContainerStarted","Data":"0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19"} Jan 28 15:53:57 crc kubenswrapper[4656]: I0128 15:53:57.930140 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-twk2d" podStartSLOduration=3.498191428 podStartE2EDuration="4.930123764s" podCreationTimestamp="2026-01-28 15:53:53 +0000 UTC" firstStartedPulling="2026-01-28 15:53:55.890614682 +0000 UTC m=+2126.398785486" lastFinishedPulling="2026-01-28 15:53:57.322547018 +0000 UTC m=+2127.830717822" observedRunningTime="2026-01-28 15:53:57.928899429 +0000 UTC m=+2128.437070233" watchObservedRunningTime="2026-01-28 15:53:57.930123764 +0000 UTC m=+2128.438294568" Jan 28 15:54:04 crc kubenswrapper[4656]: I0128 15:54:04.264681 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-twk2d" Jan 28 15:54:04 crc kubenswrapper[4656]: I0128 15:54:04.265488 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-twk2d" Jan 28 15:54:04 crc kubenswrapper[4656]: I0128 15:54:04.308319 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-twk2d" Jan 28 15:54:05 crc kubenswrapper[4656]: I0128 15:54:05.004268 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-twk2d" Jan 28 15:54:05 crc kubenswrapper[4656]: I0128 15:54:05.053345 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-twk2d"] Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.034327 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-twk2d" podUID="e57c2aaa-e624-4554-a216-411263758595" containerName="registry-server" containerID="cri-o://0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19" gracePeriod=2 Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.500986 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-twk2d" Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.542580 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-utilities\") pod \"e57c2aaa-e624-4554-a216-411263758595\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.542664 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sjc5\" (UniqueName: \"kubernetes.io/projected/e57c2aaa-e624-4554-a216-411263758595-kube-api-access-7sjc5\") pod \"e57c2aaa-e624-4554-a216-411263758595\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.542708 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-catalog-content\") pod \"e57c2aaa-e624-4554-a216-411263758595\" (UID: \"e57c2aaa-e624-4554-a216-411263758595\") " Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.544116 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-utilities" (OuterVolumeSpecName: "utilities") pod "e57c2aaa-e624-4554-a216-411263758595" (UID: "e57c2aaa-e624-4554-a216-411263758595"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.549407 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57c2aaa-e624-4554-a216-411263758595-kube-api-access-7sjc5" (OuterVolumeSpecName: "kube-api-access-7sjc5") pod "e57c2aaa-e624-4554-a216-411263758595" (UID: "e57c2aaa-e624-4554-a216-411263758595"). InnerVolumeSpecName "kube-api-access-7sjc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.645085 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:54:07 crc kubenswrapper[4656]: I0128 15:54:07.645126 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sjc5\" (UniqueName: \"kubernetes.io/projected/e57c2aaa-e624-4554-a216-411263758595-kube-api-access-7sjc5\") on node \"crc\" DevicePath \"\"" Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.047134 4656 generic.go:334] "Generic (PLEG): container finished" podID="e57c2aaa-e624-4554-a216-411263758595" containerID="0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19" exitCode=0 Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.047215 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twk2d" event={"ID":"e57c2aaa-e624-4554-a216-411263758595","Type":"ContainerDied","Data":"0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19"} Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.047247 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twk2d" event={"ID":"e57c2aaa-e624-4554-a216-411263758595","Type":"ContainerDied","Data":"6264cbc4dfb3251a994990b3e8fa424077d3c85f1ecb113417ca9e2569d22b7c"} Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.047268 4656 scope.go:117] "RemoveContainer" containerID="0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19" Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.047430 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-twk2d" Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.089299 4656 scope.go:117] "RemoveContainer" containerID="8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d" Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.195468 4656 scope.go:117] "RemoveContainer" containerID="90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878" Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.226883 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e57c2aaa-e624-4554-a216-411263758595" (UID: "e57c2aaa-e624-4554-a216-411263758595"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.271928 4656 scope.go:117] "RemoveContainer" containerID="0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19" Jan 28 15:54:08 crc kubenswrapper[4656]: E0128 15:54:08.272714 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19\": container with ID starting with 0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19 not found: ID does not exist" containerID="0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19" Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.272785 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19"} err="failed to get container status \"0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19\": rpc error: code = NotFound desc = could not find container \"0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19\": container with ID starting with 0dcf6daca124a4edd25bafdaefb54b78a6b5327bc7c3cd63152af5f51806ee19 not found: ID does not exist" Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.272857 4656 scope.go:117] "RemoveContainer" containerID="8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d" Jan 28 15:54:08 crc kubenswrapper[4656]: E0128 15:54:08.273216 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d\": container with ID starting with 8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d not found: ID does not exist" containerID="8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d" Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.273280 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d"} err="failed to get container status \"8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d\": rpc error: code = NotFound desc = could not find container \"8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d\": container with ID starting with 8baa311d2251638a188a6b5a565e73cfaccd85a412f4055095b69d9b15c6fb2d not found: ID does not exist" Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.273313 4656 scope.go:117] "RemoveContainer" containerID="90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878" Jan 28 15:54:08 crc kubenswrapper[4656]: E0128 15:54:08.273829 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878\": container with ID starting with 90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878 not found: ID does not exist" containerID="90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878" Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.273871 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878"} err="failed to get container status \"90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878\": rpc error: code = NotFound desc = could not 
find container \"90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878\": container with ID starting with 90b1034a9c822fc5a96d9be445e0c12499035edd5e20000879b8981e2ef71878 not found: ID does not exist" Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.296943 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e57c2aaa-e624-4554-a216-411263758595-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.381966 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-twk2d"] Jan 28 15:54:08 crc kubenswrapper[4656]: I0128 15:54:08.389976 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-twk2d"] Jan 28 15:54:09 crc kubenswrapper[4656]: I0128 15:54:09.181542 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e57c2aaa-e624-4554-a216-411263758595" path="/var/lib/kubelet/pods/e57c2aaa-e624-4554-a216-411263758595/volumes" Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.380519 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n5zgq"] Jan 28 15:54:27 crc kubenswrapper[4656]: E0128 15:54:27.381516 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57c2aaa-e624-4554-a216-411263758595" containerName="extract-utilities" Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.381540 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57c2aaa-e624-4554-a216-411263758595" containerName="extract-utilities" Jan 28 15:54:27 crc kubenswrapper[4656]: E0128 15:54:27.381587 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57c2aaa-e624-4554-a216-411263758595" containerName="extract-content" Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.381596 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57c2aaa-e624-4554-a216-411263758595" containerName="extract-content" Jan 28 15:54:27 crc kubenswrapper[4656]: E0128 15:54:27.381612 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57c2aaa-e624-4554-a216-411263758595" containerName="registry-server" Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.381620 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57c2aaa-e624-4554-a216-411263758595" containerName="registry-server" Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.381830 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="e57c2aaa-e624-4554-a216-411263758595" containerName="registry-server" Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.387380 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.401662 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5zgq"] Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.560851 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js5cf\" (UniqueName: \"kubernetes.io/projected/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-kube-api-access-js5cf\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.561037 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-catalog-content\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.561097 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-utilities\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.663529 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js5cf\" (UniqueName: \"kubernetes.io/projected/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-kube-api-access-js5cf\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.663648 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-catalog-content\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.663700 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-utilities\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.664733 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-utilities\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.664776 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-catalog-content\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.696100 4656 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-js5cf\" (UniqueName: \"kubernetes.io/projected/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-kube-api-access-js5cf\") pod \"redhat-marketplace-n5zgq\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:27 crc kubenswrapper[4656]: I0128 15:54:27.721981 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:28 crc kubenswrapper[4656]: I0128 15:54:28.210073 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5zgq"] Jan 28 15:54:28 crc kubenswrapper[4656]: E0128 15:54:28.776796 4656 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb182725_ad9b_4f0d_a39d_2894dd0e9eec.slice/crio-conmon-df6efca17129572fe0168f8842215e2146f762458f4fe160548e67caf45b4087.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb182725_ad9b_4f0d_a39d_2894dd0e9eec.slice/crio-df6efca17129572fe0168f8842215e2146f762458f4fe160548e67caf45b4087.scope\": RecentStats: unable to find data in memory cache]" Jan 28 15:54:29 crc kubenswrapper[4656]: I0128 15:54:29.217519 4656 generic.go:334] "Generic (PLEG): container finished" podID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerID="df6efca17129572fe0168f8842215e2146f762458f4fe160548e67caf45b4087" exitCode=0 Jan 28 15:54:29 crc kubenswrapper[4656]: I0128 15:54:29.217570 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5zgq" event={"ID":"cb182725-ad9b-4f0d-a39d-2894dd0e9eec","Type":"ContainerDied","Data":"df6efca17129572fe0168f8842215e2146f762458f4fe160548e67caf45b4087"} Jan 28 15:54:29 crc kubenswrapper[4656]: I0128 15:54:29.217602 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5zgq" event={"ID":"cb182725-ad9b-4f0d-a39d-2894dd0e9eec","Type":"ContainerStarted","Data":"4f3afce81c85cdea5251cb0f237aedd7ef85ea415e43fa2fbb9a8029586bf277"} Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.369390 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rzhzn"] Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.371516 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.383790 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rzhzn"] Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.513671 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-utilities\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.513965 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-catalog-content\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.514246 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxpsd\" (UniqueName: \"kubernetes.io/projected/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-kube-api-access-hxpsd\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.616365 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-utilities\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.616709 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-catalog-content\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.616864 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxpsd\" (UniqueName: \"kubernetes.io/projected/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-kube-api-access-hxpsd\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.616955 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-utilities\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.617081 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-catalog-content\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.638018 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hxpsd\" (UniqueName: \"kubernetes.io/projected/885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9-kube-api-access-hxpsd\") pod \"redhat-operators-rzhzn\" (UID: \"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9\") " pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:54:30 crc kubenswrapper[4656]: I0128 15:54:30.698470 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:54:31 crc kubenswrapper[4656]: W0128 15:54:31.393605 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod885e9bc9_ca09_4e4e_95ef_7b95d52c8dc9.slice/crio-0ca613556205ab6edb8e1954b3599b5997c08538a675d876d4aecb8bd3951d72 WatchSource:0}: Error finding container 0ca613556205ab6edb8e1954b3599b5997c08538a675d876d4aecb8bd3951d72: Status 404 returned error can't find the container with id 0ca613556205ab6edb8e1954b3599b5997c08538a675d876d4aecb8bd3951d72 Jan 28 15:54:31 crc kubenswrapper[4656]: I0128 15:54:31.401776 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rzhzn"] Jan 28 15:54:32 crc kubenswrapper[4656]: I0128 15:54:32.243493 4656 generic.go:334] "Generic (PLEG): container finished" podID="885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9" containerID="48098686c1686f1d4d282eec6040cca09079c828b7580f6e6566d71b1092992e" exitCode=0 Jan 28 15:54:32 crc kubenswrapper[4656]: I0128 15:54:32.243556 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzhzn" event={"ID":"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9","Type":"ContainerDied","Data":"48098686c1686f1d4d282eec6040cca09079c828b7580f6e6566d71b1092992e"} Jan 28 15:54:32 crc kubenswrapper[4656]: I0128 15:54:32.243979 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzhzn" event={"ID":"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9","Type":"ContainerStarted","Data":"0ca613556205ab6edb8e1954b3599b5997c08538a675d876d4aecb8bd3951d72"} Jan 28 15:54:32 crc kubenswrapper[4656]: I0128 15:54:32.246982 4656 generic.go:334] "Generic (PLEG): container finished" podID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerID="2a0f04e479398ba85f28717023d2a1723ae7b04de613eda29938edd64f0ea69e" exitCode=0 Jan 28 15:54:32 crc kubenswrapper[4656]: I0128 15:54:32.247031 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5zgq" event={"ID":"cb182725-ad9b-4f0d-a39d-2894dd0e9eec","Type":"ContainerDied","Data":"2a0f04e479398ba85f28717023d2a1723ae7b04de613eda29938edd64f0ea69e"} Jan 28 15:54:34 crc kubenswrapper[4656]: I0128 15:54:34.267085 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5zgq" event={"ID":"cb182725-ad9b-4f0d-a39d-2894dd0e9eec","Type":"ContainerStarted","Data":"a57ed383ffe0656b19a7d349ff6ccd07470e1ec15490834868bbec5370166a25"} Jan 28 15:54:34 crc kubenswrapper[4656]: I0128 15:54:34.297029 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n5zgq" podStartSLOduration=3.091862315 podStartE2EDuration="7.296999743s" podCreationTimestamp="2026-01-28 15:54:27 +0000 UTC" firstStartedPulling="2026-01-28 15:54:29.220446134 +0000 UTC m=+2159.728616938" lastFinishedPulling="2026-01-28 15:54:33.425583562 +0000 UTC m=+2163.933754366" observedRunningTime="2026-01-28 15:54:34.290003244 +0000 UTC m=+2164.798174038" watchObservedRunningTime="2026-01-28 
15:54:34.296999743 +0000 UTC m=+2164.805170557" Jan 28 15:54:37 crc kubenswrapper[4656]: I0128 15:54:37.722944 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:37 crc kubenswrapper[4656]: I0128 15:54:37.723793 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:37 crc kubenswrapper[4656]: I0128 15:54:37.768365 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:38 crc kubenswrapper[4656]: I0128 15:54:38.348882 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:38 crc kubenswrapper[4656]: I0128 15:54:38.398851 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5zgq"] Jan 28 15:54:40 crc kubenswrapper[4656]: I0128 15:54:40.329591 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n5zgq" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerName="registry-server" containerID="cri-o://a57ed383ffe0656b19a7d349ff6ccd07470e1ec15490834868bbec5370166a25" gracePeriod=2 Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.354646 4656 generic.go:334] "Generic (PLEG): container finished" podID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerID="a57ed383ffe0656b19a7d349ff6ccd07470e1ec15490834868bbec5370166a25" exitCode=0 Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.354712 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5zgq" event={"ID":"cb182725-ad9b-4f0d-a39d-2894dd0e9eec","Type":"ContainerDied","Data":"a57ed383ffe0656b19a7d349ff6ccd07470e1ec15490834868bbec5370166a25"} Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.704752 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.836252 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-utilities\") pod \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.836293 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js5cf\" (UniqueName: \"kubernetes.io/projected/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-kube-api-access-js5cf\") pod \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.836528 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-catalog-content\") pod \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\" (UID: \"cb182725-ad9b-4f0d-a39d-2894dd0e9eec\") " Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.837090 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-utilities" (OuterVolumeSpecName: "utilities") pod "cb182725-ad9b-4f0d-a39d-2894dd0e9eec" (UID: "cb182725-ad9b-4f0d-a39d-2894dd0e9eec"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.841792 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-kube-api-access-js5cf" (OuterVolumeSpecName: "kube-api-access-js5cf") pod "cb182725-ad9b-4f0d-a39d-2894dd0e9eec" (UID: "cb182725-ad9b-4f0d-a39d-2894dd0e9eec"). InnerVolumeSpecName "kube-api-access-js5cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.851705 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb182725-ad9b-4f0d-a39d-2894dd0e9eec" (UID: "cb182725-ad9b-4f0d-a39d-2894dd0e9eec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.938378 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.938414 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:54:43 crc kubenswrapper[4656]: I0128 15:54:43.938429 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js5cf\" (UniqueName: \"kubernetes.io/projected/cb182725-ad9b-4f0d-a39d-2894dd0e9eec-kube-api-access-js5cf\") on node \"crc\" DevicePath \"\"" Jan 28 15:54:44 crc kubenswrapper[4656]: I0128 15:54:44.368936 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5zgq" event={"ID":"cb182725-ad9b-4f0d-a39d-2894dd0e9eec","Type":"ContainerDied","Data":"4f3afce81c85cdea5251cb0f237aedd7ef85ea415e43fa2fbb9a8029586bf277"} Jan 28 15:54:44 crc kubenswrapper[4656]: I0128 15:54:44.369049 4656 scope.go:117] "RemoveContainer" containerID="a57ed383ffe0656b19a7d349ff6ccd07470e1ec15490834868bbec5370166a25" Jan 28 15:54:44 crc kubenswrapper[4656]: I0128 15:54:44.369062 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5zgq" Jan 28 15:54:44 crc kubenswrapper[4656]: I0128 15:54:44.419017 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5zgq"] Jan 28 15:54:44 crc kubenswrapper[4656]: I0128 15:54:44.424145 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5zgq"] Jan 28 15:54:44 crc kubenswrapper[4656]: I0128 15:54:44.836083 4656 scope.go:117] "RemoveContainer" containerID="2a0f04e479398ba85f28717023d2a1723ae7b04de613eda29938edd64f0ea69e" Jan 28 15:54:44 crc kubenswrapper[4656]: I0128 15:54:44.936182 4656 scope.go:117] "RemoveContainer" containerID="df6efca17129572fe0168f8842215e2146f762458f4fe160548e67caf45b4087" Jan 28 15:54:45 crc kubenswrapper[4656]: I0128 15:54:45.180291 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" path="/var/lib/kubelet/pods/cb182725-ad9b-4f0d-a39d-2894dd0e9eec/volumes" Jan 28 15:54:45 crc kubenswrapper[4656]: I0128 15:54:45.381197 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzhzn" event={"ID":"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9","Type":"ContainerStarted","Data":"3bb46874ceec0f9341d3ac5399ed19efe332d91056be4c194b5ce973cec48a7e"} Jan 28 15:54:46 crc kubenswrapper[4656]: I0128 15:54:46.408004 4656 generic.go:334] "Generic (PLEG): container finished" podID="885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9" containerID="3bb46874ceec0f9341d3ac5399ed19efe332d91056be4c194b5ce973cec48a7e" exitCode=0 Jan 28 15:54:46 crc kubenswrapper[4656]: I0128 15:54:46.408057 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzhzn" event={"ID":"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9","Type":"ContainerDied","Data":"3bb46874ceec0f9341d3ac5399ed19efe332d91056be4c194b5ce973cec48a7e"} Jan 28 15:54:57 crc kubenswrapper[4656]: I0128 15:54:57.508500 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rzhzn" event={"ID":"885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9","Type":"ContainerStarted","Data":"d084efcc9a88194fbe3d0e7103e6b7833de09a444bca022469c59ded4ad82d3b"} Jan 28 15:54:58 crc kubenswrapper[4656]: I0128 15:54:58.540332 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rzhzn" podStartSLOduration=3.57546239 podStartE2EDuration="28.540305989s" podCreationTimestamp="2026-01-28 15:54:30 +0000 UTC" firstStartedPulling="2026-01-28 15:54:32.245665643 +0000 UTC m=+2162.753836447" lastFinishedPulling="2026-01-28 15:54:57.210509242 +0000 UTC m=+2187.718680046" observedRunningTime="2026-01-28 15:54:58.538180639 +0000 UTC m=+2189.046351443" watchObservedRunningTime="2026-01-28 15:54:58.540305989 +0000 UTC m=+2189.048476793" Jan 28 15:55:00 crc kubenswrapper[4656]: I0128 15:55:00.698899 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:55:00 crc kubenswrapper[4656]: I0128 15:55:00.699194 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:55:01 crc kubenswrapper[4656]: I0128 15:55:01.860685 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rzhzn" podUID="885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9" containerName="registry-server" probeResult="failure" output=< Jan 28 15:55:01 crc 
kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 15:55:01 crc kubenswrapper[4656]: > Jan 28 15:55:10 crc kubenswrapper[4656]: I0128 15:55:10.745591 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:55:10 crc kubenswrapper[4656]: I0128 15:55:10.796893 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rzhzn" Jan 28 15:55:10 crc kubenswrapper[4656]: I0128 15:55:10.896010 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rzhzn"] Jan 28 15:55:10 crc kubenswrapper[4656]: I0128 15:55:10.988411 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gbjhq"] Jan 28 15:55:10 crc kubenswrapper[4656]: I0128 15:55:10.988932 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gbjhq" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerName="registry-server" containerID="cri-o://7357e3dce7c91d9015f9378573757a7ac9f266086a426319f16d40934b333d4a" gracePeriod=2 Jan 28 15:55:11 crc kubenswrapper[4656]: I0128 15:55:11.652959 4656 generic.go:334] "Generic (PLEG): container finished" podID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerID="7357e3dce7c91d9015f9378573757a7ac9f266086a426319f16d40934b333d4a" exitCode=0 Jan 28 15:55:11 crc kubenswrapper[4656]: I0128 15:55:11.653177 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbjhq" event={"ID":"ea7644c9-f50c-43f8-8165-3fa375c3b9c0","Type":"ContainerDied","Data":"7357e3dce7c91d9015f9378573757a7ac9f266086a426319f16d40934b333d4a"} Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.180764 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.240969 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-utilities\") pod \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.241058 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6bdv\" (UniqueName: \"kubernetes.io/projected/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-kube-api-access-b6bdv\") pod \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.241150 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-catalog-content\") pod \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\" (UID: \"ea7644c9-f50c-43f8-8165-3fa375c3b9c0\") " Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.241451 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-utilities" (OuterVolumeSpecName: "utilities") pod "ea7644c9-f50c-43f8-8165-3fa375c3b9c0" (UID: "ea7644c9-f50c-43f8-8165-3fa375c3b9c0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.264347 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-kube-api-access-b6bdv" (OuterVolumeSpecName: "kube-api-access-b6bdv") pod "ea7644c9-f50c-43f8-8165-3fa375c3b9c0" (UID: "ea7644c9-f50c-43f8-8165-3fa375c3b9c0"). InnerVolumeSpecName "kube-api-access-b6bdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.342770 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.343059 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6bdv\" (UniqueName: \"kubernetes.io/projected/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-kube-api-access-b6bdv\") on node \"crc\" DevicePath \"\"" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.350252 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea7644c9-f50c-43f8-8165-3fa375c3b9c0" (UID: "ea7644c9-f50c-43f8-8165-3fa375c3b9c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.444448 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7644c9-f50c-43f8-8165-3fa375c3b9c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.662535 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbjhq" event={"ID":"ea7644c9-f50c-43f8-8165-3fa375c3b9c0","Type":"ContainerDied","Data":"2dd598bb76a09812057b8dc1896e04e054099bec7b15c382202025f6bc1dcb53"} Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.662629 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gbjhq" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.662640 4656 scope.go:117] "RemoveContainer" containerID="7357e3dce7c91d9015f9378573757a7ac9f266086a426319f16d40934b333d4a" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.689106 4656 scope.go:117] "RemoveContainer" containerID="98395829abb715f8be4df3366fdfa04876abca34d58e9c479a1d7e699b8ca848" Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.698316 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gbjhq"] Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.704935 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gbjhq"] Jan 28 15:55:12 crc kubenswrapper[4656]: I0128 15:55:12.721531 4656 scope.go:117] "RemoveContainer" containerID="df16b6a6ba81e2b7746a8de266aea33bfee9014a3f26a0a2bc636c8468b4da77" Jan 28 15:55:13 crc kubenswrapper[4656]: I0128 15:55:13.179323 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" path="/var/lib/kubelet/pods/ea7644c9-f50c-43f8-8165-3fa375c3b9c0/volumes" Jan 28 15:55:41 crc kubenswrapper[4656]: I0128 15:55:41.263816 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:55:41 crc kubenswrapper[4656]: I0128 15:55:41.264571 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:56:11 crc kubenswrapper[4656]: I0128 15:56:11.263838 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:56:11 crc kubenswrapper[4656]: I0128 15:56:11.264487 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:56:41 crc kubenswrapper[4656]: I0128 15:56:41.263798 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:56:41 crc kubenswrapper[4656]: I0128 15:56:41.264498 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:56:41 crc kubenswrapper[4656]: I0128 15:56:41.264582 4656 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 15:56:41 crc kubenswrapper[4656]: I0128 15:56:41.265974 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:56:41 crc kubenswrapper[4656]: I0128 15:56:41.266076 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" gracePeriod=600 Jan 28 15:56:41 crc kubenswrapper[4656]: I0128 15:56:41.706071 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" exitCode=0 Jan 28 15:56:41 crc kubenswrapper[4656]: I0128 15:56:41.706147 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"} Jan 28 15:56:41 crc kubenswrapper[4656]: I0128 15:56:41.706264 4656 scope.go:117] "RemoveContainer" containerID="bb32448722780b8d9f530e9024e281722b1f8d106f54d17c562c9b55376475e0" Jan 28 15:56:41 crc kubenswrapper[4656]: E0128 15:56:41.906356 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:56:42 crc kubenswrapper[4656]: I0128 15:56:42.719098 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:56:42 crc kubenswrapper[4656]: E0128 15:56:42.719657 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:56:54 crc kubenswrapper[4656]: I0128 15:56:54.170627 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:56:54 crc kubenswrapper[4656]: E0128 15:56:54.171543 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:57:08 
crc kubenswrapper[4656]: I0128 15:57:08.170850 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:57:08 crc kubenswrapper[4656]: E0128 15:57:08.171616 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:57:21 crc kubenswrapper[4656]: I0128 15:57:21.175603 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:57:21 crc kubenswrapper[4656]: E0128 15:57:21.176667 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:57:32 crc kubenswrapper[4656]: I0128 15:57:32.171248 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:57:32 crc kubenswrapper[4656]: E0128 15:57:32.171859 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:57:45 crc kubenswrapper[4656]: I0128 15:57:45.170763 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:57:45 crc kubenswrapper[4656]: E0128 15:57:45.171678 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:58:00 crc kubenswrapper[4656]: I0128 15:58:00.171386 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:58:00 crc kubenswrapper[4656]: E0128 15:58:00.172175 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:58:14 crc kubenswrapper[4656]: I0128 15:58:14.171000 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:58:14 crc 
kubenswrapper[4656]: E0128 15:58:14.171996 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:58:26 crc kubenswrapper[4656]: I0128 15:58:26.171915 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:58:26 crc kubenswrapper[4656]: E0128 15:58:26.172878 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:58:40 crc kubenswrapper[4656]: I0128 15:58:40.171027 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:58:40 crc kubenswrapper[4656]: E0128 15:58:40.171784 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:58:54 crc kubenswrapper[4656]: I0128 15:58:54.171259 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:58:54 crc kubenswrapper[4656]: E0128 15:58:54.172354 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.632040 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ghkdq"] Jan 28 15:59:05 crc kubenswrapper[4656]: E0128 15:59:05.633200 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerName="registry-server" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.633225 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerName="registry-server" Jan 28 15:59:05 crc kubenswrapper[4656]: E0128 15:59:05.633269 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerName="registry-server" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.633281 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerName="registry-server" Jan 28 15:59:05 crc kubenswrapper[4656]: E0128 15:59:05.634564 4656 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerName="extract-content" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.634586 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerName="extract-content" Jan 28 15:59:05 crc kubenswrapper[4656]: E0128 15:59:05.634606 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerName="extract-utilities" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.634615 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerName="extract-utilities" Jan 28 15:59:05 crc kubenswrapper[4656]: E0128 15:59:05.634634 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerName="extract-utilities" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.634643 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerName="extract-utilities" Jan 28 15:59:05 crc kubenswrapper[4656]: E0128 15:59:05.634660 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerName="extract-content" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.634668 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerName="extract-content" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.634970 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb182725-ad9b-4f0d-a39d-2894dd0e9eec" containerName="registry-server" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.635069 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7644c9-f50c-43f8-8165-3fa375c3b9c0" containerName="registry-server" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.636783 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ghkdq" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.669612 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ghkdq"] Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.818395 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-catalog-content\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.818514 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-utilities\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.818856 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xvr2\" (UniqueName: \"kubernetes.io/projected/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-kube-api-access-4xvr2\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.920859 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xvr2\" (UniqueName: \"kubernetes.io/projected/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-kube-api-access-4xvr2\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.920998 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-catalog-content\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.921031 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-utilities\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.921502 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-catalog-content\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.921608 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-utilities\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.950434 4656 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4xvr2\" (UniqueName: \"kubernetes.io/projected/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-kube-api-access-4xvr2\") pod \"certified-operators-ghkdq\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") " pod="openshift-marketplace/certified-operators-ghkdq" Jan 28 15:59:05 crc kubenswrapper[4656]: I0128 15:59:05.965866 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ghkdq" Jan 28 15:59:06 crc kubenswrapper[4656]: I0128 15:59:06.643910 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ghkdq"] Jan 28 15:59:06 crc kubenswrapper[4656]: W0128 15:59:06.655629 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb90e1d72_b1d7_4ea8_bb09_a3803e370d88.slice/crio-ae17afe9e4ccf5678f100cd9b211c1cd96b324c3c4c05894852f3151b6e7cfcd WatchSource:0}: Error finding container ae17afe9e4ccf5678f100cd9b211c1cd96b324c3c4c05894852f3151b6e7cfcd: Status 404 returned error can't find the container with id ae17afe9e4ccf5678f100cd9b211c1cd96b324c3c4c05894852f3151b6e7cfcd Jan 28 15:59:06 crc kubenswrapper[4656]: I0128 15:59:06.957404 4656 generic.go:334] "Generic (PLEG): container finished" podID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerID="515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2" exitCode=0 Jan 28 15:59:06 crc kubenswrapper[4656]: I0128 15:59:06.957482 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghkdq" event={"ID":"b90e1d72-b1d7-4ea8-bb09-a3803e370d88","Type":"ContainerDied","Data":"515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2"} Jan 28 15:59:06 crc kubenswrapper[4656]: I0128 15:59:06.957761 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghkdq" event={"ID":"b90e1d72-b1d7-4ea8-bb09-a3803e370d88","Type":"ContainerStarted","Data":"ae17afe9e4ccf5678f100cd9b211c1cd96b324c3c4c05894852f3151b6e7cfcd"} Jan 28 15:59:06 crc kubenswrapper[4656]: I0128 15:59:06.959574 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:59:07 crc kubenswrapper[4656]: I0128 15:59:07.171227 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a" Jan 28 15:59:07 crc kubenswrapper[4656]: E0128 15:59:07.171540 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 15:59:08 crc kubenswrapper[4656]: I0128 15:59:08.979949 4656 generic.go:334] "Generic (PLEG): container finished" podID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerID="dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe" exitCode=0 Jan 28 15:59:08 crc kubenswrapper[4656]: I0128 15:59:08.980258 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghkdq" event={"ID":"b90e1d72-b1d7-4ea8-bb09-a3803e370d88","Type":"ContainerDied","Data":"dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe"} Jan 28 15:59:10 crc 
kubenswrapper[4656]: I0128 15:59:10.998339 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghkdq" event={"ID":"b90e1d72-b1d7-4ea8-bb09-a3803e370d88","Type":"ContainerStarted","Data":"ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd"}
Jan 28 15:59:11 crc kubenswrapper[4656]: I0128 15:59:11.031878 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ghkdq" podStartSLOduration=3.177657461 podStartE2EDuration="6.031834411s" podCreationTimestamp="2026-01-28 15:59:05 +0000 UTC" firstStartedPulling="2026-01-28 15:59:06.959223922 +0000 UTC m=+2437.467394726" lastFinishedPulling="2026-01-28 15:59:09.813400872 +0000 UTC m=+2440.321571676" observedRunningTime="2026-01-28 15:59:11.021120086 +0000 UTC m=+2441.529290890" watchObservedRunningTime="2026-01-28 15:59:11.031834411 +0000 UTC m=+2441.540005225"
Jan 28 15:59:15 crc kubenswrapper[4656]: I0128 15:59:15.966877 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:15 crc kubenswrapper[4656]: I0128 15:59:15.968903 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:16 crc kubenswrapper[4656]: I0128 15:59:16.015393 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:16 crc kubenswrapper[4656]: I0128 15:59:16.112958 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.016991 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ghkdq"]
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.017628 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ghkdq" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerName="registry-server" containerID="cri-o://ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd" gracePeriod=2
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.570464 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.703988 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-utilities\") pod \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") "
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.704229 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-catalog-content\") pod \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") "
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.704309 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xvr2\" (UniqueName: \"kubernetes.io/projected/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-kube-api-access-4xvr2\") pod \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\" (UID: \"b90e1d72-b1d7-4ea8-bb09-a3803e370d88\") "
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.705073 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-utilities" (OuterVolumeSpecName: "utilities") pod "b90e1d72-b1d7-4ea8-bb09-a3803e370d88" (UID: "b90e1d72-b1d7-4ea8-bb09-a3803e370d88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.710010 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-kube-api-access-4xvr2" (OuterVolumeSpecName: "kube-api-access-4xvr2") pod "b90e1d72-b1d7-4ea8-bb09-a3803e370d88" (UID: "b90e1d72-b1d7-4ea8-bb09-a3803e370d88"). InnerVolumeSpecName "kube-api-access-4xvr2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.759979 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b90e1d72-b1d7-4ea8-bb09-a3803e370d88" (UID: "b90e1d72-b1d7-4ea8-bb09-a3803e370d88"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.806835 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.807098 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xvr2\" (UniqueName: \"kubernetes.io/projected/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-kube-api-access-4xvr2\") on node \"crc\" DevicePath \"\""
Jan 28 15:59:20 crc kubenswrapper[4656]: I0128 15:59:20.807227 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90e1d72-b1d7-4ea8-bb09-a3803e370d88-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.099225 4656 generic.go:334] "Generic (PLEG): container finished" podID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerID="ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd" exitCode=0
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.099274 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghkdq" event={"ID":"b90e1d72-b1d7-4ea8-bb09-a3803e370d88","Type":"ContainerDied","Data":"ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd"}
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.099310 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghkdq" event={"ID":"b90e1d72-b1d7-4ea8-bb09-a3803e370d88","Type":"ContainerDied","Data":"ae17afe9e4ccf5678f100cd9b211c1cd96b324c3c4c05894852f3151b6e7cfcd"}
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.099330 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ghkdq"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.099355 4656 scope.go:117] "RemoveContainer" containerID="ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.122700 4656 scope.go:117] "RemoveContainer" containerID="dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.148417 4656 scope.go:117] "RemoveContainer" containerID="515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.151534 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ghkdq"]
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.160925 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ghkdq"]
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.176268 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 15:59:21 crc kubenswrapper[4656]: E0128 15:59:21.176555 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.183684 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" path="/var/lib/kubelet/pods/b90e1d72-b1d7-4ea8-bb09-a3803e370d88/volumes"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.188790 4656 scope.go:117] "RemoveContainer" containerID="ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd"
Jan 28 15:59:21 crc kubenswrapper[4656]: E0128 15:59:21.189476 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd\": container with ID starting with ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd not found: ID does not exist" containerID="ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.189554 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd"} err="failed to get container status \"ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd\": rpc error: code = NotFound desc = could not find container \"ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd\": container with ID starting with ce956979c7f3e1122664f9708a0c76093cdabc17b6354bb99cc49190e21c74dd not found: ID does not exist"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.189613 4656 scope.go:117] "RemoveContainer" containerID="dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe"
Jan 28 15:59:21 crc kubenswrapper[4656]: E0128 15:59:21.189996 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe\": container with ID starting with dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe not found: ID does not exist" containerID="dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.190069 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe"} err="failed to get container status \"dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe\": rpc error: code = NotFound desc = could not find container \"dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe\": container with ID starting with dd43e2a307de8a64e9278a093c3ef9af1af167402086c8bde4b7d8bb7107c8fe not found: ID does not exist"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.190137 4656 scope.go:117] "RemoveContainer" containerID="515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2"
Jan 28 15:59:21 crc kubenswrapper[4656]: E0128 15:59:21.190529 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2\": container with ID starting with 515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2 not found: ID does not exist" containerID="515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2"
Jan 28 15:59:21 crc kubenswrapper[4656]: I0128 15:59:21.190632 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2"} err="failed to get container status \"515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2\": rpc error: code = NotFound desc = could not find container \"515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2\": container with ID starting with 515b86ad1a57e1ba60058fe594c2789ccefd0550904ac593e0e4ddca4c83d6b2 not found: ID does not exist"
Jan 28 15:59:36 crc kubenswrapper[4656]: I0128 15:59:36.171272 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 15:59:36 crc kubenswrapper[4656]: E0128 15:59:36.172214 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:59:48 crc kubenswrapper[4656]: I0128 15:59:48.170730 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 15:59:48 crc kubenswrapper[4656]: E0128 15:59:48.171499 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 15:59:59 crc kubenswrapper[4656]: I0128 15:59:59.171037 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 15:59:59 crc kubenswrapper[4656]: E0128 15:59:59.171752 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.156343 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"]
Jan 28 16:00:00 crc kubenswrapper[4656]: E0128 16:00:00.157040 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerName="extract-utilities"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.157066 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerName="extract-utilities"
Jan 28 16:00:00 crc kubenswrapper[4656]: E0128 16:00:00.157079 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerName="registry-server"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.157084 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerName="registry-server"
Jan 28 16:00:00 crc kubenswrapper[4656]: E0128 16:00:00.157125 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerName="extract-content"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.157131 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerName="extract-content"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.157358 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="b90e1d72-b1d7-4ea8-bb09-a3803e370d88" containerName="registry-server"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.158013 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.163485 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.165303 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.172487 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-secret-volume\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.172592 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-config-volume\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.172665 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpzrg\" (UniqueName: \"kubernetes.io/projected/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-kube-api-access-jpzrg\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.183623 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"]
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.274227 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-secret-volume\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.274640 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-config-volume\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.274755 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpzrg\" (UniqueName: \"kubernetes.io/projected/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-kube-api-access-jpzrg\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.275844 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-config-volume\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.294076 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-secret-volume\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.297119 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpzrg\" (UniqueName: \"kubernetes.io/projected/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-kube-api-access-jpzrg\") pod \"collect-profiles-29493600-hzhqj\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.482040 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:00 crc kubenswrapper[4656]: I0128 16:00:00.974995 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"]
Jan 28 16:00:01 crc kubenswrapper[4656]: I0128 16:00:01.459840 4656 generic.go:334] "Generic (PLEG): container finished" podID="a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c" containerID="c14f95d55d05b9700f6dccf5984a4f92e913e739d483090d400925435e4b6614" exitCode=0
Jan 28 16:00:01 crc kubenswrapper[4656]: I0128 16:00:01.460003 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj" event={"ID":"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c","Type":"ContainerDied","Data":"c14f95d55d05b9700f6dccf5984a4f92e913e739d483090d400925435e4b6614"}
Jan 28 16:00:01 crc kubenswrapper[4656]: I0128 16:00:01.461499 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj" event={"ID":"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c","Type":"ContainerStarted","Data":"6330a802e6a87da1777444fc0ea0e265505129329bad95ec85d5a46fa2ded2ba"}
Jan 28 16:00:02 crc kubenswrapper[4656]: I0128 16:00:02.813143 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:02 crc kubenswrapper[4656]: I0128 16:00:02.916915 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-secret-volume\") pod \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") "
Jan 28 16:00:02 crc kubenswrapper[4656]: I0128 16:00:02.917057 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-config-volume\") pod \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") "
Jan 28 16:00:02 crc kubenswrapper[4656]: I0128 16:00:02.917153 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpzrg\" (UniqueName: \"kubernetes.io/projected/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-kube-api-access-jpzrg\") pod \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\" (UID: \"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c\") "
Jan 28 16:00:02 crc kubenswrapper[4656]: I0128 16:00:02.918139 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-config-volume" (OuterVolumeSpecName: "config-volume") pod "a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c" (UID: "a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 16:00:02 crc kubenswrapper[4656]: I0128 16:00:02.923361 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-kube-api-access-jpzrg" (OuterVolumeSpecName: "kube-api-access-jpzrg") pod "a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c" (UID: "a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c"). InnerVolumeSpecName "kube-api-access-jpzrg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 16:00:02 crc kubenswrapper[4656]: I0128 16:00:02.924468 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c" (UID: "a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 16:00:03 crc kubenswrapper[4656]: I0128 16:00:03.019033 4656 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 28 16:00:03 crc kubenswrapper[4656]: I0128 16:00:03.019088 4656 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-config-volume\") on node \"crc\" DevicePath \"\""
Jan 28 16:00:03 crc kubenswrapper[4656]: I0128 16:00:03.019098 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpzrg\" (UniqueName: \"kubernetes.io/projected/a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c-kube-api-access-jpzrg\") on node \"crc\" DevicePath \"\""
Jan 28 16:00:03 crc kubenswrapper[4656]: I0128 16:00:03.480121 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj" event={"ID":"a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c","Type":"ContainerDied","Data":"6330a802e6a87da1777444fc0ea0e265505129329bad95ec85d5a46fa2ded2ba"}
Jan 28 16:00:03 crc kubenswrapper[4656]: I0128 16:00:03.480193 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6330a802e6a87da1777444fc0ea0e265505129329bad95ec85d5a46fa2ded2ba"
Jan 28 16:00:03 crc kubenswrapper[4656]: I0128 16:00:03.480269 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-hzhqj"
Jan 28 16:00:03 crc kubenswrapper[4656]: I0128 16:00:03.932865 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl"]
Jan 28 16:00:03 crc kubenswrapper[4656]: I0128 16:00:03.939215 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-gvkjl"]
Jan 28 16:00:05 crc kubenswrapper[4656]: I0128 16:00:05.180841 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25578c16-69f7-48c0-8a44-040950b9b8a1" path="/var/lib/kubelet/pods/25578c16-69f7-48c0-8a44-040950b9b8a1/volumes"
Jan 28 16:00:14 crc kubenswrapper[4656]: I0128 16:00:14.170442 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 16:00:14 crc kubenswrapper[4656]: E0128 16:00:14.171422 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:00:25 crc kubenswrapper[4656]: I0128 16:00:25.171559 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 16:00:25 crc kubenswrapper[4656]: E0128 16:00:25.172301 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:00:33 crc kubenswrapper[4656]: I0128 16:00:33.558189 4656 scope.go:117] "RemoveContainer" containerID="8b5651c9577faa4a2a76aaa78d0b3751ec036612cc4cb26724879ce32d32d8f2"
Jan 28 16:00:38 crc kubenswrapper[4656]: I0128 16:00:38.170329 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 16:00:38 crc kubenswrapper[4656]: E0128 16:00:38.171107 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:00:50 crc kubenswrapper[4656]: I0128 16:00:50.171001 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 16:00:50 crc kubenswrapper[4656]: E0128 16:00:50.171999 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:01:02 crc kubenswrapper[4656]: I0128 16:01:02.171250 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 16:01:02 crc kubenswrapper[4656]: E0128 16:01:02.172282 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:01:17 crc kubenswrapper[4656]: I0128 16:01:17.171821 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 16:01:17 crc kubenswrapper[4656]: E0128 16:01:17.172692 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:01:32 crc kubenswrapper[4656]: I0128 16:01:32.171545 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 16:01:32 crc kubenswrapper[4656]: E0128 16:01:32.172420 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:01:43 crc kubenswrapper[4656]: I0128 16:01:43.170350 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 16:01:44 crc kubenswrapper[4656]: I0128 16:01:44.295084 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"11608f8e32efc5c69606e8eda1ef5dc362329c760e4cf15465bd4b124ca420f8"}
Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.500984 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ptqg4"]
Jan 28 16:04:08 crc kubenswrapper[4656]: E0128 16:04:08.503838 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c" containerName="collect-profiles"
Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.503965 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c" containerName="collect-profiles"
Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.504545 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f45f92-7e7f-45d4-a88c-5a3ed7b31b8c" containerName="collect-profiles"
Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.512700 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.531263 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptqg4"]
Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.686049 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-catalog-content\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.686583 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfw7h\" (UniqueName: \"kubernetes.io/projected/bfad1826-aa22-4f74-8370-00cf5c512680-kube-api-access-zfw7h\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.686622 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-utilities\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.788506 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-catalog-content\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.788877 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfw7h\" (UniqueName: \"kubernetes.io/projected/bfad1826-aa22-4f74-8370-00cf5c512680-kube-api-access-zfw7h\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.788966 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-utilities\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.789106 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-catalog-content\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.789400 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-utilities\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.818006 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfw7h\" (UniqueName: \"kubernetes.io/projected/bfad1826-aa22-4f74-8370-00cf5c512680-kube-api-access-zfw7h\") pod \"community-operators-ptqg4\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") " pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:08 crc kubenswrapper[4656]: I0128 16:04:08.857201 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:09 crc kubenswrapper[4656]: I0128 16:04:09.545447 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptqg4"]
Jan 28 16:04:09 crc kubenswrapper[4656]: I0128 16:04:09.732384 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptqg4" event={"ID":"bfad1826-aa22-4f74-8370-00cf5c512680","Type":"ContainerStarted","Data":"3e0008c7f5d57a93410fc946402619129d193b9d1b16135c815b5229c66c7258"}
Jan 28 16:04:10 crc kubenswrapper[4656]: I0128 16:04:10.744305 4656 generic.go:334] "Generic (PLEG): container finished" podID="bfad1826-aa22-4f74-8370-00cf5c512680" containerID="905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758" exitCode=0
Jan 28 16:04:10 crc kubenswrapper[4656]: I0128 16:04:10.744353 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptqg4" event={"ID":"bfad1826-aa22-4f74-8370-00cf5c512680","Type":"ContainerDied","Data":"905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758"}
Jan 28 16:04:10 crc kubenswrapper[4656]: I0128 16:04:10.746966 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 16:04:11 crc kubenswrapper[4656]: I0128 16:04:11.264349 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 16:04:11 crc kubenswrapper[4656]: I0128 16:04:11.264697 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 16:04:12 crc kubenswrapper[4656]: I0128 16:04:12.760244 4656 generic.go:334] "Generic (PLEG): container finished" podID="bfad1826-aa22-4f74-8370-00cf5c512680" containerID="61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb" exitCode=0
Jan 28 16:04:12 crc kubenswrapper[4656]: I0128 16:04:12.760389 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptqg4" event={"ID":"bfad1826-aa22-4f74-8370-00cf5c512680","Type":"ContainerDied","Data":"61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb"}
Jan 28 16:04:13 crc kubenswrapper[4656]: I0128 16:04:13.772590 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptqg4" event={"ID":"bfad1826-aa22-4f74-8370-00cf5c512680","Type":"ContainerStarted","Data":"08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c"}
Jan 28 16:04:13 crc kubenswrapper[4656]: I0128 16:04:13.799109 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ptqg4" podStartSLOduration=3.096495383 podStartE2EDuration="5.799085758s" podCreationTimestamp="2026-01-28 16:04:08 +0000 UTC" firstStartedPulling="2026-01-28 16:04:10.746633348 +0000 UTC m=+2741.254804152" lastFinishedPulling="2026-01-28 16:04:13.449223713 +0000 UTC m=+2743.957394527" observedRunningTime="2026-01-28 16:04:13.792560371 +0000 UTC m=+2744.300731185" watchObservedRunningTime="2026-01-28 16:04:13.799085758 +0000 UTC m=+2744.307256562"
Jan 28 16:04:18 crc kubenswrapper[4656]: I0128 16:04:18.858158 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:18 crc kubenswrapper[4656]: I0128 16:04:18.860205 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:18 crc kubenswrapper[4656]: I0128 16:04:18.905647 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:19 crc kubenswrapper[4656]: I0128 16:04:19.876457 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:19 crc kubenswrapper[4656]: I0128 16:04:19.930333 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptqg4"]
Jan 28 16:04:21 crc kubenswrapper[4656]: I0128 16:04:21.835211 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ptqg4" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" containerName="registry-server" containerID="cri-o://08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c" gracePeriod=2
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.507657 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.686394 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-catalog-content\") pod \"bfad1826-aa22-4f74-8370-00cf5c512680\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") "
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.686491 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfw7h\" (UniqueName: \"kubernetes.io/projected/bfad1826-aa22-4f74-8370-00cf5c512680-kube-api-access-zfw7h\") pod \"bfad1826-aa22-4f74-8370-00cf5c512680\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") "
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.686529 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-utilities\") pod \"bfad1826-aa22-4f74-8370-00cf5c512680\" (UID: \"bfad1826-aa22-4f74-8370-00cf5c512680\") "
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.687428 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-utilities" (OuterVolumeSpecName: "utilities") pod "bfad1826-aa22-4f74-8370-00cf5c512680" (UID: "bfad1826-aa22-4f74-8370-00cf5c512680"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.699945 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfad1826-aa22-4f74-8370-00cf5c512680-kube-api-access-zfw7h" (OuterVolumeSpecName: "kube-api-access-zfw7h") pod "bfad1826-aa22-4f74-8370-00cf5c512680" (UID: "bfad1826-aa22-4f74-8370-00cf5c512680"). InnerVolumeSpecName "kube-api-access-zfw7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.742893 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfad1826-aa22-4f74-8370-00cf5c512680" (UID: "bfad1826-aa22-4f74-8370-00cf5c512680"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.788392 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.788438 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfw7h\" (UniqueName: \"kubernetes.io/projected/bfad1826-aa22-4f74-8370-00cf5c512680-kube-api-access-zfw7h\") on node \"crc\" DevicePath \"\""
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.788457 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfad1826-aa22-4f74-8370-00cf5c512680-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.843634 4656 generic.go:334] "Generic (PLEG): container finished" podID="bfad1826-aa22-4f74-8370-00cf5c512680" containerID="08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c" exitCode=0
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.843691 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptqg4" event={"ID":"bfad1826-aa22-4f74-8370-00cf5c512680","Type":"ContainerDied","Data":"08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c"}
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.843738 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptqg4" event={"ID":"bfad1826-aa22-4f74-8370-00cf5c512680","Type":"ContainerDied","Data":"3e0008c7f5d57a93410fc946402619129d193b9d1b16135c815b5229c66c7258"}
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.843794 4656 scope.go:117] "RemoveContainer" containerID="08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c"
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.843977 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptqg4"
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.867874 4656 scope.go:117] "RemoveContainer" containerID="61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb"
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.886024 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptqg4"]
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.909788 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ptqg4"]
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.917736 4656 scope.go:117] "RemoveContainer" containerID="905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758"
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.946691 4656 scope.go:117] "RemoveContainer" containerID="08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c"
Jan 28 16:04:22 crc kubenswrapper[4656]: E0128 16:04:22.947288 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c\": container with ID starting with 08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c not found: ID does not exist" containerID="08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c"
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.947335 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c"} err="failed to get container status \"08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c\": rpc error: code = NotFound desc = could not find container \"08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c\": container with ID starting with 08f9ca5d534653a618b923b4f72e61f62c4f769029c62f8fbbe0e7718a91633c not found: ID does not exist"
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.947360 4656 scope.go:117] "RemoveContainer" containerID="61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb"
Jan 28 16:04:22 crc kubenswrapper[4656]: E0128 16:04:22.947719 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb\": container with ID starting with 61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb not found: ID does not exist" containerID="61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb"
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.947760 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb"} err="failed to get container status \"61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb\": rpc error: code = NotFound desc = could not find container \"61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb\": container with ID starting with 61f39cc65751e240dcba2b09674a5306b20ef5f14a209257b826823fb4ea1efb not found: ID does not exist"
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.947791 4656 scope.go:117] "RemoveContainer" containerID="905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758"
Jan 28 16:04:22 crc kubenswrapper[4656]: E0128 16:04:22.948119 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758\": container with ID starting with 905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758 not found: ID does not exist" containerID="905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758"
Jan 28 16:04:22 crc kubenswrapper[4656]: I0128 16:04:22.948147 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758"} err="failed to get container status \"905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758\": rpc error: code = NotFound desc = could not find container \"905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758\": container with ID starting with 905ccdb443cb5f7a113b6370b6bb310220e327f8740d55181041807f55e7f758 not found: ID does not exist"
Jan 28 16:04:23 crc kubenswrapper[4656]: I0128 16:04:23.183476 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" path="/var/lib/kubelet/pods/bfad1826-aa22-4f74-8370-00cf5c512680/volumes"
Jan 28 16:04:41 crc kubenswrapper[4656]: I0128 16:04:41.264119 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 16:04:41 crc kubenswrapper[4656]: I0128 16:04:41.264728 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 16:05:11 crc kubenswrapper[4656]: I0128 16:05:11.263923 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 16:05:11 crc kubenswrapper[4656]: I0128 16:05:11.265731 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 16:05:11 crc kubenswrapper[4656]: I0128 16:05:11.265947 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk"
Jan 28 16:05:11 crc kubenswrapper[4656]: I0128 16:05:11.266843 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11608f8e32efc5c69606e8eda1ef5dc362329c760e4cf15465bd4b124ca420f8"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 16:05:11 crc kubenswrapper[4656]: I0128 16:05:11.267029 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://11608f8e32efc5c69606e8eda1ef5dc362329c760e4cf15465bd4b124ca420f8" gracePeriod=600
Jan 28 16:05:12 crc kubenswrapper[4656]: I0128 16:05:12.247141 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="11608f8e32efc5c69606e8eda1ef5dc362329c760e4cf15465bd4b124ca420f8" exitCode=0
Jan 28 16:05:12 crc kubenswrapper[4656]: I0128 16:05:12.247169 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"11608f8e32efc5c69606e8eda1ef5dc362329c760e4cf15465bd4b124ca420f8"}
Jan 28 16:05:12 crc kubenswrapper[4656]: I0128 16:05:12.247674 4656 scope.go:117] "RemoveContainer" containerID="78094bac7d4159156e1aaa02e4f7057db862d74bb94931aa20dd209b708b8c6a"
Jan 28 16:05:13 crc kubenswrapper[4656]: I0128 16:05:13.258572 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"}
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.653182 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-grfxk"]
Jan 28 16:05:16 crc kubenswrapper[4656]: E0128 16:05:16.653686 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" containerName="registry-server"
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.653709 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" containerName="registry-server"
Jan 28 16:05:16 crc kubenswrapper[4656]: E0128 16:05:16.653736 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" containerName="extract-content"
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.653743 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" containerName="extract-content"
Jan 28 16:05:16 crc kubenswrapper[4656]: E0128 16:05:16.653756 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" containerName="extract-utilities"
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.653765 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" containerName="extract-utilities"
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.653980 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfad1826-aa22-4f74-8370-00cf5c512680" containerName="registry-server"
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.655451 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-grfxk"
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.667530 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-grfxk"]
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.762731 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-catalog-content\") pod \"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk"
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.762886 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-utilities\") pod \"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk"
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.762955 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn2jk\" (UniqueName: \"kubernetes.io/projected/b576d819-1f74-45a6-a12f-8cb6f88900a2-kube-api-access-gn2jk\") pod \"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk"
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.864704 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-utilities\") pod \"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk"
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.864801 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn2jk\" (UniqueName: \"kubernetes.io/projected/b576d819-1f74-45a6-a12f-8cb6f88900a2-kube-api-access-gn2jk\") pod \"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk"
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.864829 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-catalog-content\") pod \"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk"
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.865515 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-catalog-content\") pod \"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk"
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.865832 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-utilities\") pod \"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk"
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.891368 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn2jk\" (UniqueName: \"kubernetes.io/projected/b576d819-1f74-45a6-a12f-8cb6f88900a2-kube-api-access-gn2jk\") pod \"redhat-operators-grfxk\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " pod="openshift-marketplace/redhat-operators-grfxk"
Jan 28 16:05:16 crc kubenswrapper[4656]: I0128 16:05:16.982290 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-grfxk"
Jan 28 16:05:17 crc kubenswrapper[4656]: I0128 16:05:17.593860 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-grfxk"]
Jan 28 16:05:18 crc kubenswrapper[4656]: I0128 16:05:18.298046 4656 generic.go:334] "Generic (PLEG): container finished" podID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerID="25707c1b7aa34daf8d1d203593fed712625cdedcf4f7a415d31a120483a3edfb" exitCode=0
Jan 28 16:05:18 crc kubenswrapper[4656]: I0128 16:05:18.298096 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grfxk" event={"ID":"b576d819-1f74-45a6-a12f-8cb6f88900a2","Type":"ContainerDied","Data":"25707c1b7aa34daf8d1d203593fed712625cdedcf4f7a415d31a120483a3edfb"}
Jan 28 16:05:18 crc kubenswrapper[4656]: I0128 16:05:18.298147 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grfxk" event={"ID":"b576d819-1f74-45a6-a12f-8cb6f88900a2","Type":"ContainerStarted","Data":"f915ffe410661603fd5d6938c4d9be3708344da110936f2eccefe21a78fdbbad"}
Jan 28 16:05:20 crc kubenswrapper[4656]: I0128 16:05:20.313847 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grfxk" event={"ID":"b576d819-1f74-45a6-a12f-8cb6f88900a2","Type":"ContainerStarted","Data":"38f722e1bfe839109981c0f0f0db4eff3febe78c06d24b651cd5373d7bc12663"}
Jan 28 16:05:21 crc kubenswrapper[4656]: I0128 16:05:21.325217 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grfxk" event={"ID":"b576d819-1f74-45a6-a12f-8cb6f88900a2","Type":"ContainerDied","Data":"38f722e1bfe839109981c0f0f0db4eff3febe78c06d24b651cd5373d7bc12663"}
Jan 28 16:05:21 crc kubenswrapper[4656]: I0128 16:05:21.325141 4656 generic.go:334] "Generic (PLEG): container finished" podID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerID="38f722e1bfe839109981c0f0f0db4eff3febe78c06d24b651cd5373d7bc12663" exitCode=0
Jan 28 16:05:29 crc kubenswrapper[4656]: I0128 16:05:29.417603 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grfxk" event={"ID":"b576d819-1f74-45a6-a12f-8cb6f88900a2","Type":"ContainerStarted","Data":"1023efe98e9294c68f6dd563afee559a68f500f630f7dc52bc58bfe310af0080"}
Jan 28 16:05:29 crc kubenswrapper[4656]: I0128 16:05:29.441547 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-grfxk" podStartSLOduration=2.835280146 podStartE2EDuration="13.441530099s" podCreationTimestamp="2026-01-28 16:05:16 +0000 UTC" firstStartedPulling="2026-01-28 16:05:18.299811124 +0000 UTC m=+2808.807981928" lastFinishedPulling="2026-01-28 16:05:28.906061077 +0000 UTC m=+2819.414231881" observedRunningTime="2026-01-28 16:05:29.440932822 +0000 UTC m=+2819.949103636" watchObservedRunningTime="2026-01-28 16:05:29.441530099 +0000 UTC m=+2819.949700903"
Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.701326 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bq6jw"]
Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.704351 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq6jw"
Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.711043 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq6jw"]
Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.891760 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-utilities\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw"
Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.891936 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7js8\" (UniqueName: \"kubernetes.io/projected/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-kube-api-access-h7js8\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw"
Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.892095 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-catalog-content\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw"
Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.993205 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-utilities\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw"
Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.993271 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7js8\" (UniqueName: \"kubernetes.io/projected/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-kube-api-access-h7js8\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw"
Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.993365 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-catalog-content\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw"
Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.993909 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-catalog-content\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw"
Jan 28 16:05:34 crc kubenswrapper[4656]: I0128 16:05:34.993906 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-utilities\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw"
Jan 28 16:05:35 crc kubenswrapper[4656]: I0128 16:05:35.011050 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7js8\" (UniqueName: \"kubernetes.io/projected/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-kube-api-access-h7js8\") pod \"redhat-marketplace-bq6jw\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " pod="openshift-marketplace/redhat-marketplace-bq6jw"
Jan 28 16:05:35 crc kubenswrapper[4656]: I0128 16:05:35.021733 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq6jw"
Jan 28 16:05:35 crc kubenswrapper[4656]: I0128 16:05:35.464690 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq6jw"]
Jan 28 16:05:36 crc kubenswrapper[4656]: I0128 16:05:36.470600 4656 generic.go:334] "Generic (PLEG): container finished" podID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerID="0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e" exitCode=0
Jan 28 16:05:36 crc kubenswrapper[4656]: I0128 16:05:36.470663 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq6jw" event={"ID":"8c42606f-0266-40c9-a1b1-9e83f56f6ff9","Type":"ContainerDied","Data":"0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e"}
Jan 28 16:05:36 crc kubenswrapper[4656]: I0128 16:05:36.470705 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq6jw" event={"ID":"8c42606f-0266-40c9-a1b1-9e83f56f6ff9","Type":"ContainerStarted","Data":"02e879378d671726794bd95864aec6b2f37302998154d98cac6b76633c12877a"}
Jan 28 16:05:36 crc kubenswrapper[4656]: I0128 16:05:36.984939 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-grfxk"
Jan 28 16:05:36 crc kubenswrapper[4656]: I0128 16:05:36.985281 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-grfxk"
Jan 28 16:05:38 crc kubenswrapper[4656]: I0128 16:05:38.033829 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-grfxk" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" probeResult="failure" output=<
Jan 28 16:05:38 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s
Jan 28 16:05:38 crc kubenswrapper[4656]: >
Jan 28 16:05:38 crc kubenswrapper[4656]: I0128 16:05:38.487503 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq6jw" event={"ID":"8c42606f-0266-40c9-a1b1-9e83f56f6ff9","Type":"ContainerStarted","Data":"7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d"}
Jan 28 16:05:39 crc kubenswrapper[4656]: I0128 16:05:39.565241 4656 generic.go:334] "Generic (PLEG): container finished" podID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerID="7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d" exitCode=0
Jan 28 16:05:39 crc kubenswrapper[4656]: I0128 16:05:39.565336 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq6jw" event={"ID":"8c42606f-0266-40c9-a1b1-9e83f56f6ff9","Type":"ContainerDied","Data":"7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d"}
Jan 28 16:05:41 crc kubenswrapper[4656]: I0128 16:05:41.591392 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq6jw"
event={"ID":"8c42606f-0266-40c9-a1b1-9e83f56f6ff9","Type":"ContainerStarted","Data":"255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67"} Jan 28 16:05:41 crc kubenswrapper[4656]: I0128 16:05:41.619348 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bq6jw" podStartSLOduration=2.828878261 podStartE2EDuration="7.619244708s" podCreationTimestamp="2026-01-28 16:05:34 +0000 UTC" firstStartedPulling="2026-01-28 16:05:36.472662515 +0000 UTC m=+2826.980833319" lastFinishedPulling="2026-01-28 16:05:41.263028962 +0000 UTC m=+2831.771199766" observedRunningTime="2026-01-28 16:05:41.618305611 +0000 UTC m=+2832.126476435" watchObservedRunningTime="2026-01-28 16:05:41.619244708 +0000 UTC m=+2832.127415512" Jan 28 16:05:45 crc kubenswrapper[4656]: I0128 16:05:45.022812 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:45 crc kubenswrapper[4656]: I0128 16:05:45.023363 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:45 crc kubenswrapper[4656]: I0128 16:05:45.064241 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:48 crc kubenswrapper[4656]: I0128 16:05:48.024535 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-grfxk" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" probeResult="failure" output=< Jan 28 16:05:48 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 16:05:48 crc kubenswrapper[4656]: > Jan 28 16:05:55 crc kubenswrapper[4656]: I0128 16:05:55.067403 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:55 crc kubenswrapper[4656]: I0128 16:05:55.122327 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq6jw"] Jan 28 16:05:55 crc kubenswrapper[4656]: I0128 16:05:55.699117 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bq6jw" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerName="registry-server" containerID="cri-o://255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67" gracePeriod=2 Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.170929 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.329358 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7js8\" (UniqueName: \"kubernetes.io/projected/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-kube-api-access-h7js8\") pod \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.329769 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-utilities\") pod \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.329930 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-catalog-content\") pod \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\" (UID: \"8c42606f-0266-40c9-a1b1-9e83f56f6ff9\") " Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.335336 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-utilities" (OuterVolumeSpecName: "utilities") pod "8c42606f-0266-40c9-a1b1-9e83f56f6ff9" (UID: "8c42606f-0266-40c9-a1b1-9e83f56f6ff9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.339155 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-kube-api-access-h7js8" (OuterVolumeSpecName: "kube-api-access-h7js8") pod "8c42606f-0266-40c9-a1b1-9e83f56f6ff9" (UID: "8c42606f-0266-40c9-a1b1-9e83f56f6ff9"). InnerVolumeSpecName "kube-api-access-h7js8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.352576 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c42606f-0266-40c9-a1b1-9e83f56f6ff9" (UID: "8c42606f-0266-40c9-a1b1-9e83f56f6ff9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.431523 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7js8\" (UniqueName: \"kubernetes.io/projected/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-kube-api-access-h7js8\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.431586 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.431611 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c42606f-0266-40c9-a1b1-9e83f56f6ff9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.775559 4656 generic.go:334] "Generic (PLEG): container finished" podID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerID="255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67" exitCode=0 Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.775624 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq6jw" event={"ID":"8c42606f-0266-40c9-a1b1-9e83f56f6ff9","Type":"ContainerDied","Data":"255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67"} Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.775678 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq6jw" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.775695 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq6jw" event={"ID":"8c42606f-0266-40c9-a1b1-9e83f56f6ff9","Type":"ContainerDied","Data":"02e879378d671726794bd95864aec6b2f37302998154d98cac6b76633c12877a"} Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.775798 4656 scope.go:117] "RemoveContainer" containerID="255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.831044 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq6jw"] Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.839368 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq6jw"] Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.841900 4656 scope.go:117] "RemoveContainer" containerID="7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.868360 4656 scope.go:117] "RemoveContainer" containerID="0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.901644 4656 scope.go:117] "RemoveContainer" containerID="255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67" Jan 28 16:05:56 crc kubenswrapper[4656]: E0128 16:05:56.902137 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67\": container with ID starting with 255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67 not found: ID does not exist" containerID="255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.902247 4656 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67"} err="failed to get container status \"255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67\": rpc error: code = NotFound desc = could not find container \"255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67\": container with ID starting with 255e70b6efc11a0197226206d236a0c3a169b6928cd2dbcfb7b1c4eeb9bbec67 not found: ID does not exist" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.902282 4656 scope.go:117] "RemoveContainer" containerID="7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d" Jan 28 16:05:56 crc kubenswrapper[4656]: E0128 16:05:56.902587 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d\": container with ID starting with 7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d not found: ID does not exist" containerID="7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.902610 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d"} err="failed to get container status \"7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d\": rpc error: code = NotFound desc = could not find container \"7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d\": container with ID starting with 7f2521b4b967bf2985ff0896141dfafdae4e215f1686cf9c33b97427a8c0776d not found: ID does not exist" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.902623 4656 scope.go:117] "RemoveContainer" containerID="0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e" Jan 28 16:05:56 crc kubenswrapper[4656]: E0128 16:05:56.902837 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e\": container with ID starting with 0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e not found: ID does not exist" containerID="0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e" Jan 28 16:05:56 crc kubenswrapper[4656]: I0128 16:05:56.902863 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e"} err="failed to get container status \"0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e\": rpc error: code = NotFound desc = could not find container \"0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e\": container with ID starting with 0a97246b50cb6678db132a72277156be01db4bff61c62969fdf36301b0859f0e not found: ID does not exist" Jan 28 16:05:57 crc kubenswrapper[4656]: I0128 16:05:57.181584 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" path="/var/lib/kubelet/pods/8c42606f-0266-40c9-a1b1-9e83f56f6ff9/volumes" Jan 28 16:05:58 crc kubenswrapper[4656]: I0128 16:05:58.035635 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-grfxk" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" probeResult="failure" output=< Jan 28 16:05:58 crc 
kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 16:05:58 crc kubenswrapper[4656]: > Jan 28 16:06:08 crc kubenswrapper[4656]: I0128 16:06:08.025544 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-grfxk" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" probeResult="failure" output=< Jan 28 16:06:08 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 16:06:08 crc kubenswrapper[4656]: > Jan 28 16:06:18 crc kubenswrapper[4656]: I0128 16:06:18.034373 4656 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-grfxk" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" probeResult="failure" output=< Jan 28 16:06:18 crc kubenswrapper[4656]: timeout: failed to connect service ":50051" within 1s Jan 28 16:06:18 crc kubenswrapper[4656]: > Jan 28 16:06:27 crc kubenswrapper[4656]: I0128 16:06:27.031139 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:06:27 crc kubenswrapper[4656]: I0128 16:06:27.085638 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:06:27 crc kubenswrapper[4656]: I0128 16:06:27.278887 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-grfxk"] Jan 28 16:06:28 crc kubenswrapper[4656]: I0128 16:06:28.150671 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-grfxk" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" containerID="cri-o://1023efe98e9294c68f6dd563afee559a68f500f630f7dc52bc58bfe310af0080" gracePeriod=2 Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.160653 4656 generic.go:334] "Generic (PLEG): container finished" podID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerID="1023efe98e9294c68f6dd563afee559a68f500f630f7dc52bc58bfe310af0080" exitCode=0 Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.160856 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grfxk" event={"ID":"b576d819-1f74-45a6-a12f-8cb6f88900a2","Type":"ContainerDied","Data":"1023efe98e9294c68f6dd563afee559a68f500f630f7dc52bc58bfe310af0080"} Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.161059 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grfxk" event={"ID":"b576d819-1f74-45a6-a12f-8cb6f88900a2","Type":"ContainerDied","Data":"f915ffe410661603fd5d6938c4d9be3708344da110936f2eccefe21a78fdbbad"} Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.161104 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f915ffe410661603fd5d6938c4d9be3708344da110936f2eccefe21a78fdbbad" Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.184382 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.244356 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-utilities\") pod \"b576d819-1f74-45a6-a12f-8cb6f88900a2\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.244454 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn2jk\" (UniqueName: \"kubernetes.io/projected/b576d819-1f74-45a6-a12f-8cb6f88900a2-kube-api-access-gn2jk\") pod \"b576d819-1f74-45a6-a12f-8cb6f88900a2\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.244495 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-catalog-content\") pod \"b576d819-1f74-45a6-a12f-8cb6f88900a2\" (UID: \"b576d819-1f74-45a6-a12f-8cb6f88900a2\") " Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.245891 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-utilities" (OuterVolumeSpecName: "utilities") pod "b576d819-1f74-45a6-a12f-8cb6f88900a2" (UID: "b576d819-1f74-45a6-a12f-8cb6f88900a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.251465 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b576d819-1f74-45a6-a12f-8cb6f88900a2-kube-api-access-gn2jk" (OuterVolumeSpecName: "kube-api-access-gn2jk") pod "b576d819-1f74-45a6-a12f-8cb6f88900a2" (UID: "b576d819-1f74-45a6-a12f-8cb6f88900a2"). InnerVolumeSpecName "kube-api-access-gn2jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.347344 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.347388 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn2jk\" (UniqueName: \"kubernetes.io/projected/b576d819-1f74-45a6-a12f-8cb6f88900a2-kube-api-access-gn2jk\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.360300 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b576d819-1f74-45a6-a12f-8cb6f88900a2" (UID: "b576d819-1f74-45a6-a12f-8cb6f88900a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:06:29 crc kubenswrapper[4656]: I0128 16:06:29.449131 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b576d819-1f74-45a6-a12f-8cb6f88900a2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:30 crc kubenswrapper[4656]: I0128 16:06:30.170645 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-grfxk" Jan 28 16:06:30 crc kubenswrapper[4656]: I0128 16:06:30.209920 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-grfxk"] Jan 28 16:06:30 crc kubenswrapper[4656]: I0128 16:06:30.229579 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-grfxk"] Jan 28 16:06:31 crc kubenswrapper[4656]: I0128 16:06:31.181996 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" path="/var/lib/kubelet/pods/b576d819-1f74-45a6-a12f-8cb6f88900a2/volumes" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.198312 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5j5mh/must-gather-llsvb"] Jan 28 16:07:24 crc kubenswrapper[4656]: E0128 16:07:24.199424 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerName="extract-content" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.199450 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerName="extract-content" Jan 28 16:07:24 crc kubenswrapper[4656]: E0128 16:07:24.199474 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="extract-content" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.199481 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="extract-content" Jan 28 16:07:24 crc kubenswrapper[4656]: E0128 16:07:24.199495 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="extract-utilities" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.199503 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="extract-utilities" Jan 28 16:07:24 crc kubenswrapper[4656]: E0128 16:07:24.199527 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerName="registry-server" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.199537 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerName="registry-server" Jan 28 16:07:24 crc kubenswrapper[4656]: E0128 16:07:24.199553 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.199560 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" Jan 28 16:07:24 crc kubenswrapper[4656]: E0128 16:07:24.199572 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerName="extract-utilities" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.199581 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerName="extract-utilities" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.199845 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c42606f-0266-40c9-a1b1-9e83f56f6ff9" containerName="registry-server" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.199877 4656 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b576d819-1f74-45a6-a12f-8cb6f88900a2" containerName="registry-server" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.201052 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.214594 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5j5mh"/"openshift-service-ca.crt" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.224344 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5j5mh"/"kube-root-ca.crt" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.312848 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7nj2\" (UniqueName: \"kubernetes.io/projected/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-kube-api-access-w7nj2\") pod \"must-gather-llsvb\" (UID: \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\") " pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.312942 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-must-gather-output\") pod \"must-gather-llsvb\" (UID: \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\") " pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.343370 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5j5mh/must-gather-llsvb"] Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.423326 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7nj2\" (UniqueName: \"kubernetes.io/projected/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-kube-api-access-w7nj2\") pod \"must-gather-llsvb\" (UID: \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\") " pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.423384 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-must-gather-output\") pod \"must-gather-llsvb\" (UID: \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\") " pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.423867 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-must-gather-output\") pod \"must-gather-llsvb\" (UID: \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\") " pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.456138 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7nj2\" (UniqueName: \"kubernetes.io/projected/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-kube-api-access-w7nj2\") pod \"must-gather-llsvb\" (UID: \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\") " pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:07:24 crc kubenswrapper[4656]: I0128 16:07:24.525652 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5j5mh/must-gather-llsvb" Jan 28 16:07:25 crc kubenswrapper[4656]: I0128 16:07:25.076507 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5j5mh/must-gather-llsvb"] Jan 28 16:07:25 crc kubenswrapper[4656]: W0128 16:07:25.095660 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c1ba1d6_2c0d_4263_a2ae_a744f60f89b9.slice/crio-e68b2490cb51a7d99ed49ed4724ef0b0282616f5a2b4dc83da0d0d6727a62f4f WatchSource:0}: Error finding container e68b2490cb51a7d99ed49ed4724ef0b0282616f5a2b4dc83da0d0d6727a62f4f: Status 404 returned error can't find the container with id e68b2490cb51a7d99ed49ed4724ef0b0282616f5a2b4dc83da0d0d6727a62f4f Jan 28 16:07:25 crc kubenswrapper[4656]: I0128 16:07:25.882139 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/must-gather-llsvb" event={"ID":"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9","Type":"ContainerStarted","Data":"e68b2490cb51a7d99ed49ed4724ef0b0282616f5a2b4dc83da0d0d6727a62f4f"} Jan 28 16:07:34 crc kubenswrapper[4656]: I0128 16:07:34.962962 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/must-gather-llsvb" event={"ID":"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9","Type":"ContainerStarted","Data":"f859a2e2caa0fb544c6c6a44179d086e76e997ea8b2edcf00d53e17d188daeea"} Jan 28 16:07:34 crc kubenswrapper[4656]: I0128 16:07:34.963573 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/must-gather-llsvb" event={"ID":"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9","Type":"ContainerStarted","Data":"6baa9b5e64a74d90d0dfc29c5dbed884e993ea7992e2b03687f98f3fda85bb0b"} Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.353098 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5j5mh/crc-debug-mp6g6"] Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.354883 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.358829 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5j5mh"/"default-dockercfg-5hbgb" Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.469837 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk459\" (UniqueName: \"kubernetes.io/projected/4a9db18e-2297-4236-b517-de3b26ead9a1-kube-api-access-bk459\") pod \"crc-debug-mp6g6\" (UID: \"4a9db18e-2297-4236-b517-de3b26ead9a1\") " pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.470137 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a9db18e-2297-4236-b517-de3b26ead9a1-host\") pod \"crc-debug-mp6g6\" (UID: \"4a9db18e-2297-4236-b517-de3b26ead9a1\") " pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.572007 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk459\" (UniqueName: \"kubernetes.io/projected/4a9db18e-2297-4236-b517-de3b26ead9a1-kube-api-access-bk459\") pod \"crc-debug-mp6g6\" (UID: \"4a9db18e-2297-4236-b517-de3b26ead9a1\") " pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.572118 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a9db18e-2297-4236-b517-de3b26ead9a1-host\") pod \"crc-debug-mp6g6\" (UID: \"4a9db18e-2297-4236-b517-de3b26ead9a1\") " pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.572336 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a9db18e-2297-4236-b517-de3b26ead9a1-host\") pod \"crc-debug-mp6g6\" (UID: \"4a9db18e-2297-4236-b517-de3b26ead9a1\") " pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.592510 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk459\" (UniqueName: \"kubernetes.io/projected/4a9db18e-2297-4236-b517-de3b26ead9a1-kube-api-access-bk459\") pod \"crc-debug-mp6g6\" (UID: \"4a9db18e-2297-4236-b517-de3b26ead9a1\") " pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.675330 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:07:35 crc kubenswrapper[4656]: W0128 16:07:35.717316 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a9db18e_2297_4236_b517_de3b26ead9a1.slice/crio-9a047c77aa7202ec9ac0a0cb49e1b2a8cd2c54d8a23582ab741c2f75fabdb82a WatchSource:0}: Error finding container 9a047c77aa7202ec9ac0a0cb49e1b2a8cd2c54d8a23582ab741c2f75fabdb82a: Status 404 returned error can't find the container with id 9a047c77aa7202ec9ac0a0cb49e1b2a8cd2c54d8a23582ab741c2f75fabdb82a Jan 28 16:07:35 crc kubenswrapper[4656]: I0128 16:07:35.975078 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" event={"ID":"4a9db18e-2297-4236-b517-de3b26ead9a1","Type":"ContainerStarted","Data":"9a047c77aa7202ec9ac0a0cb49e1b2a8cd2c54d8a23582ab741c2f75fabdb82a"} Jan 28 16:07:36 crc kubenswrapper[4656]: I0128 16:07:36.002781 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5j5mh/must-gather-llsvb" podStartSLOduration=2.713314524 podStartE2EDuration="12.002751781s" podCreationTimestamp="2026-01-28 16:07:24 +0000 UTC" firstStartedPulling="2026-01-28 16:07:25.120622619 +0000 UTC m=+2935.628793423" lastFinishedPulling="2026-01-28 16:07:34.410059876 +0000 UTC m=+2944.918230680" observedRunningTime="2026-01-28 16:07:35.994064822 +0000 UTC m=+2946.502235636" watchObservedRunningTime="2026-01-28 16:07:36.002751781 +0000 UTC m=+2946.510922585" Jan 28 16:07:41 crc kubenswrapper[4656]: I0128 16:07:41.264070 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:07:41 crc kubenswrapper[4656]: I0128 16:07:41.264833 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:07:51 crc kubenswrapper[4656]: E0128 16:07:51.165343 4656 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Jan 28 16:07:51 crc kubenswrapper[4656]: E0128 16:07:51.167611 4656 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar 
--ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bk459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-mp6g6_openshift-must-gather-5j5mh(4a9db18e-2297-4236-b517-de3b26ead9a1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:07:51 crc kubenswrapper[4656]: E0128 16:07:51.169093 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" podUID="4a9db18e-2297-4236-b517-de3b26ead9a1" Jan 28 16:07:52 crc kubenswrapper[4656]: E0128 16:07:52.146264 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" podUID="4a9db18e-2297-4236-b517-de3b26ead9a1" Jan 28 16:08:06 crc kubenswrapper[4656]: I0128 16:08:06.339983 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" event={"ID":"4a9db18e-2297-4236-b517-de3b26ead9a1","Type":"ContainerStarted","Data":"56f14a3cf7039489472a1e9af12718c79dc3de07799d984f8c3f92c6f2afe477"} Jan 28 16:08:11 crc kubenswrapper[4656]: I0128 16:08:11.264086 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:08:11 crc kubenswrapper[4656]: I0128 16:08:11.265946 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:08:28 crc kubenswrapper[4656]: I0128 16:08:28.655162 4656 generic.go:334] "Generic (PLEG): container finished" 
podID="4a9db18e-2297-4236-b517-de3b26ead9a1" containerID="56f14a3cf7039489472a1e9af12718c79dc3de07799d984f8c3f92c6f2afe477" exitCode=0 Jan 28 16:08:28 crc kubenswrapper[4656]: I0128 16:08:28.655318 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" event={"ID":"4a9db18e-2297-4236-b517-de3b26ead9a1","Type":"ContainerDied","Data":"56f14a3cf7039489472a1e9af12718c79dc3de07799d984f8c3f92c6f2afe477"} Jan 28 16:08:29 crc kubenswrapper[4656]: I0128 16:08:29.772180 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:08:29 crc kubenswrapper[4656]: I0128 16:08:29.825398 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5j5mh/crc-debug-mp6g6"] Jan 28 16:08:29 crc kubenswrapper[4656]: I0128 16:08:29.843681 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5j5mh/crc-debug-mp6g6"] Jan 28 16:08:29 crc kubenswrapper[4656]: I0128 16:08:29.938472 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk459\" (UniqueName: \"kubernetes.io/projected/4a9db18e-2297-4236-b517-de3b26ead9a1-kube-api-access-bk459\") pod \"4a9db18e-2297-4236-b517-de3b26ead9a1\" (UID: \"4a9db18e-2297-4236-b517-de3b26ead9a1\") " Jan 28 16:08:29 crc kubenswrapper[4656]: I0128 16:08:29.938694 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a9db18e-2297-4236-b517-de3b26ead9a1-host\") pod \"4a9db18e-2297-4236-b517-de3b26ead9a1\" (UID: \"4a9db18e-2297-4236-b517-de3b26ead9a1\") " Jan 28 16:08:29 crc kubenswrapper[4656]: I0128 16:08:29.939098 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a9db18e-2297-4236-b517-de3b26ead9a1-host" (OuterVolumeSpecName: "host") pod "4a9db18e-2297-4236-b517-de3b26ead9a1" (UID: "4a9db18e-2297-4236-b517-de3b26ead9a1"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:08:29 crc kubenswrapper[4656]: I0128 16:08:29.948450 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a9db18e-2297-4236-b517-de3b26ead9a1-kube-api-access-bk459" (OuterVolumeSpecName: "kube-api-access-bk459") pod "4a9db18e-2297-4236-b517-de3b26ead9a1" (UID: "4a9db18e-2297-4236-b517-de3b26ead9a1"). InnerVolumeSpecName "kube-api-access-bk459". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:30 crc kubenswrapper[4656]: I0128 16:08:30.040760 4656 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a9db18e-2297-4236-b517-de3b26ead9a1-host\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:30 crc kubenswrapper[4656]: I0128 16:08:30.041075 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk459\" (UniqueName: \"kubernetes.io/projected/4a9db18e-2297-4236-b517-de3b26ead9a1-kube-api-access-bk459\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:30 crc kubenswrapper[4656]: I0128 16:08:30.680655 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a047c77aa7202ec9ac0a0cb49e1b2a8cd2c54d8a23582ab741c2f75fabdb82a" Jan 28 16:08:30 crc kubenswrapper[4656]: I0128 16:08:30.680737 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5j5mh/crc-debug-mp6g6" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.122283 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5j5mh/crc-debug-mt444"] Jan 28 16:08:31 crc kubenswrapper[4656]: E0128 16:08:31.122885 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9db18e-2297-4236-b517-de3b26ead9a1" containerName="container-00" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.122904 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9db18e-2297-4236-b517-de3b26ead9a1" containerName="container-00" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.123189 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9db18e-2297-4236-b517-de3b26ead9a1" containerName="container-00" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.124113 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.127972 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5j5mh"/"default-dockercfg-5hbgb" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.205571 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a9db18e-2297-4236-b517-de3b26ead9a1" path="/var/lib/kubelet/pods/4a9db18e-2297-4236-b517-de3b26ead9a1/volumes" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.283146 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4db035db-0355-474f-8b0a-901729dd209b-host\") pod \"crc-debug-mt444\" (UID: \"4db035db-0355-474f-8b0a-901729dd209b\") " pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.283298 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8pr8\" (UniqueName: \"kubernetes.io/projected/4db035db-0355-474f-8b0a-901729dd209b-kube-api-access-l8pr8\") pod \"crc-debug-mt444\" (UID: \"4db035db-0355-474f-8b0a-901729dd209b\") " pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.384632 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8pr8\" (UniqueName: \"kubernetes.io/projected/4db035db-0355-474f-8b0a-901729dd209b-kube-api-access-l8pr8\") pod \"crc-debug-mt444\" (UID: \"4db035db-0355-474f-8b0a-901729dd209b\") " pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.384774 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4db035db-0355-474f-8b0a-901729dd209b-host\") pod \"crc-debug-mt444\" (UID: \"4db035db-0355-474f-8b0a-901729dd209b\") " pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.384886 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4db035db-0355-474f-8b0a-901729dd209b-host\") pod \"crc-debug-mt444\" (UID: \"4db035db-0355-474f-8b0a-901729dd209b\") " pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.420300 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8pr8\" (UniqueName: 
\"kubernetes.io/projected/4db035db-0355-474f-8b0a-901729dd209b-kube-api-access-l8pr8\") pod \"crc-debug-mt444\" (UID: \"4db035db-0355-474f-8b0a-901729dd209b\") " pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.450597 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:31 crc kubenswrapper[4656]: I0128 16:08:31.699944 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/crc-debug-mt444" event={"ID":"4db035db-0355-474f-8b0a-901729dd209b","Type":"ContainerStarted","Data":"446b08d4835c835ff17f9da2e83f5afa02d465d3ecd7cd2efc5548f3f6362321"} Jan 28 16:08:32 crc kubenswrapper[4656]: I0128 16:08:32.710143 4656 generic.go:334] "Generic (PLEG): container finished" podID="4db035db-0355-474f-8b0a-901729dd209b" containerID="fbc64e0a3e9a4ebf2ebec5c723e100b57156e1f3139bab5f6fa4f521a05d3cd1" exitCode=1 Jan 28 16:08:32 crc kubenswrapper[4656]: I0128 16:08:32.710244 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/crc-debug-mt444" event={"ID":"4db035db-0355-474f-8b0a-901729dd209b","Type":"ContainerDied","Data":"fbc64e0a3e9a4ebf2ebec5c723e100b57156e1f3139bab5f6fa4f521a05d3cd1"} Jan 28 16:08:32 crc kubenswrapper[4656]: I0128 16:08:32.767854 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5j5mh/crc-debug-mt444"] Jan 28 16:08:32 crc kubenswrapper[4656]: I0128 16:08:32.777905 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5j5mh/crc-debug-mt444"] Jan 28 16:08:33 crc kubenswrapper[4656]: I0128 16:08:33.816736 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:33 crc kubenswrapper[4656]: I0128 16:08:33.857702 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8pr8\" (UniqueName: \"kubernetes.io/projected/4db035db-0355-474f-8b0a-901729dd209b-kube-api-access-l8pr8\") pod \"4db035db-0355-474f-8b0a-901729dd209b\" (UID: \"4db035db-0355-474f-8b0a-901729dd209b\") " Jan 28 16:08:33 crc kubenswrapper[4656]: I0128 16:08:33.878123 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4db035db-0355-474f-8b0a-901729dd209b-kube-api-access-l8pr8" (OuterVolumeSpecName: "kube-api-access-l8pr8") pod "4db035db-0355-474f-8b0a-901729dd209b" (UID: "4db035db-0355-474f-8b0a-901729dd209b"). InnerVolumeSpecName "kube-api-access-l8pr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:33 crc kubenswrapper[4656]: I0128 16:08:33.959205 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4db035db-0355-474f-8b0a-901729dd209b-host\") pod \"4db035db-0355-474f-8b0a-901729dd209b\" (UID: \"4db035db-0355-474f-8b0a-901729dd209b\") " Jan 28 16:08:33 crc kubenswrapper[4656]: I0128 16:08:33.959680 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8pr8\" (UniqueName: \"kubernetes.io/projected/4db035db-0355-474f-8b0a-901729dd209b-kube-api-access-l8pr8\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:33 crc kubenswrapper[4656]: I0128 16:08:33.959720 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4db035db-0355-474f-8b0a-901729dd209b-host" (OuterVolumeSpecName: "host") pod "4db035db-0355-474f-8b0a-901729dd209b" (UID: "4db035db-0355-474f-8b0a-901729dd209b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:08:34 crc kubenswrapper[4656]: I0128 16:08:34.061340 4656 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4db035db-0355-474f-8b0a-901729dd209b-host\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:34 crc kubenswrapper[4656]: I0128 16:08:34.734792 4656 scope.go:117] "RemoveContainer" containerID="fbc64e0a3e9a4ebf2ebec5c723e100b57156e1f3139bab5f6fa4f521a05d3cd1" Jan 28 16:08:34 crc kubenswrapper[4656]: I0128 16:08:34.734811 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5j5mh/crc-debug-mt444" Jan 28 16:08:35 crc kubenswrapper[4656]: I0128 16:08:35.180283 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4db035db-0355-474f-8b0a-901729dd209b" path="/var/lib/kubelet/pods/4db035db-0355-474f-8b0a-901729dd209b/volumes" Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.263712 4656 patch_prober.go:28] interesting pod/machine-config-daemon-8llkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.264150 4656 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.264211 4656 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.264866 4656 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"} pod="openshift-machine-config-operator/machine-config-daemon-8llkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.264919 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" 
podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerName="machine-config-daemon" containerID="cri-o://75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" gracePeriod=600 Jan 28 16:08:41 crc kubenswrapper[4656]: E0128 16:08:41.583856 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.820827 4656 generic.go:334] "Generic (PLEG): container finished" podID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" exitCode=0 Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.820878 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerDied","Data":"75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"} Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.820916 4656 scope.go:117] "RemoveContainer" containerID="11608f8e32efc5c69606e8eda1ef5dc362329c760e4cf15465bd4b124ca420f8" Jan 28 16:08:41 crc kubenswrapper[4656]: I0128 16:08:41.821817 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:08:41 crc kubenswrapper[4656]: E0128 16:08:41.822223 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:08:46 crc kubenswrapper[4656]: I0128 16:08:46.983655 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-74f6bcbc87-8fng8_0d7bd9aa-43b5-4819-9bef-a61574670ba6/init/0.log" Jan 28 16:08:47 crc kubenswrapper[4656]: I0128 16:08:47.140870 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-74f6bcbc87-8fng8_0d7bd9aa-43b5-4819-9bef-a61574670ba6/init/0.log" Jan 28 16:08:47 crc kubenswrapper[4656]: I0128 16:08:47.257780 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7eff8e04-7afc-4a92-998f-db692ece65e7/kube-state-metrics/0.log" Jan 28 16:08:47 crc kubenswrapper[4656]: I0128 16:08:47.272801 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-74f6bcbc87-8fng8_0d7bd9aa-43b5-4819-9bef-a61574670ba6/dnsmasq-dns/0.log" Jan 28 16:08:47 crc kubenswrapper[4656]: I0128 16:08:47.603244 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f25455cb-6f99-4958-b7bd-9fa56e45f6e1/memcached/0.log" Jan 28 16:08:47 crc kubenswrapper[4656]: I0128 16:08:47.673033 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6a46bc21-63f0-461d-b33d-ec98cb059408/mysql-bootstrap/0.log" Jan 28 16:08:47 crc kubenswrapper[4656]: I0128 16:08:47.881000 4656 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_6a46bc21-63f0-461d-b33d-ec98cb059408/mysql-bootstrap/0.log" Jan 28 16:08:47 crc kubenswrapper[4656]: I0128 16:08:47.919433 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6a46bc21-63f0-461d-b33d-ec98cb059408/galera/0.log" Jan 28 16:08:47 crc kubenswrapper[4656]: I0128 16:08:47.920097 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8e41d89b-8943-4aec-9e33-00db569a2ce8/mysql-bootstrap/0.log" Jan 28 16:08:48 crc kubenswrapper[4656]: I0128 16:08:48.217653 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8e41d89b-8943-4aec-9e33-00db569a2ce8/galera/0.log" Jan 28 16:08:48 crc kubenswrapper[4656]: I0128 16:08:48.244351 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8e41d89b-8943-4aec-9e33-00db569a2ce8/mysql-bootstrap/0.log" Jan 28 16:08:48 crc kubenswrapper[4656]: I0128 16:08:48.255549 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7pppv_4815f130-4106-456b-9bcb-b34536d9ddc9/ovn-controller/0.log" Jan 28 16:08:48 crc kubenswrapper[4656]: I0128 16:08:48.485798 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vs8p5_f39f654e-78ca-44c2-8c6a-a1de43a83d3f/openstack-network-exporter/0.log" Jan 28 16:08:48 crc kubenswrapper[4656]: I0128 16:08:48.608002 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-28hwk_beab0392-2167-4283-97ae-12498c5d02c1/ovsdb-server-init/0.log" Jan 28 16:08:48 crc kubenswrapper[4656]: I0128 16:08:48.784848 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-28hwk_beab0392-2167-4283-97ae-12498c5d02c1/ovsdb-server-init/0.log" Jan 28 16:08:48 crc kubenswrapper[4656]: I0128 16:08:48.835894 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-28hwk_beab0392-2167-4283-97ae-12498c5d02c1/ovs-vswitchd/0.log" Jan 28 16:08:48 crc kubenswrapper[4656]: I0128 16:08:48.890225 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-28hwk_beab0392-2167-4283-97ae-12498c5d02c1/ovsdb-server/0.log" Jan 28 16:08:49 crc kubenswrapper[4656]: I0128 16:08:49.059318 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2fd90425-2113-4787-b18d-332f32cedd87/ovn-northd/0.log" Jan 28 16:08:49 crc kubenswrapper[4656]: I0128 16:08:49.067264 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2fd90425-2113-4787-b18d-332f32cedd87/openstack-network-exporter/0.log" Jan 28 16:08:49 crc kubenswrapper[4656]: I0128 16:08:49.404668 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_681fa692-9a54-4d03-a31c-952409143c4f/openstack-network-exporter/0.log" Jan 28 16:08:49 crc kubenswrapper[4656]: I0128 16:08:49.440154 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_681fa692-9a54-4d03-a31c-952409143c4f/ovsdbserver-nb/0.log" Jan 28 16:08:49 crc kubenswrapper[4656]: I0128 16:08:49.577287 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_da949f76-8013-4824-bda9-0656b43920b5/openstack-network-exporter/0.log" Jan 28 16:08:49 crc kubenswrapper[4656]: I0128 16:08:49.643124 4656 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_da949f76-8013-4824-bda9-0656b43920b5/ovsdbserver-sb/0.log" Jan 28 16:08:49 crc kubenswrapper[4656]: I0128 16:08:49.839121 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_07f26e32-4b43-4591-9ed2-6426a96e596e/setup-container/0.log" Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.034739 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_07f26e32-4b43-4591-9ed2-6426a96e596e/setup-container/0.log" Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.068698 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2239f1cd-f384-40df-9f71-a46caf290038/setup-container/0.log" Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.138470 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_07f26e32-4b43-4591-9ed2-6426a96e596e/rabbitmq/0.log" Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.441167 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2239f1cd-f384-40df-9f71-a46caf290038/setup-container/0.log" Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.450571 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2239f1cd-f384-40df-9f71-a46caf290038/rabbitmq/0.log" Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.511761 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-mfbzm_5db57b48-1e29-4c73-b488-d6998232fce1/swift-ring-rebalance/0.log" Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.753352 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/account-auditor/0.log" Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.808122 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/account-reaper/0.log" Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.847021 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/account-replicator/0.log" Jan 28 16:08:50 crc kubenswrapper[4656]: I0128 16:08:50.981005 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/account-server/0.log" Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.090243 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/container-auditor/0.log" Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.130061 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/container-server/0.log" Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.147503 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/container-replicator/0.log" Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.240093 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/container-updater/0.log" Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.337463 4656 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/object-auditor/0.log" Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.390247 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/object-expirer/0.log" Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.448568 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/object-replicator/0.log" Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.549491 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/object-server/0.log" Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.601914 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/object-updater/0.log" Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.652053 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/rsync/0.log" Jan 28 16:08:51 crc kubenswrapper[4656]: I0128 16:08:51.733423 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_19a7b52a-dfe9-47b0-818e-48752d76068e/swift-recon-cron/0.log" Jan 28 16:08:57 crc kubenswrapper[4656]: I0128 16:08:57.171330 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:08:57 crc kubenswrapper[4656]: E0128 16:08:57.172384 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:09:10 crc kubenswrapper[4656]: I0128 16:09:10.170747 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:09:10 crc kubenswrapper[4656]: E0128 16:09:10.171734 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:09:14 crc kubenswrapper[4656]: I0128 16:09:14.579726 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6bc7f4f4cf-hd57q_6ce4cdbc-3227-4679-8da9-9fd537996bd7/manager/0.log" Jan 28 16:09:14 crc kubenswrapper[4656]: I0128 16:09:14.832363 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq_a11dcdce-e6bc-48a2-b273-3755e5aee495/util/0.log" Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.085570 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq_a11dcdce-e6bc-48a2-b273-3755e5aee495/util/0.log" Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.116925 
4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq_a11dcdce-e6bc-48a2-b273-3755e5aee495/pull/0.log" Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.162080 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq_a11dcdce-e6bc-48a2-b273-3755e5aee495/pull/0.log" Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.293810 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq_a11dcdce-e6bc-48a2-b273-3755e5aee495/util/0.log" Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.358579 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq_a11dcdce-e6bc-48a2-b273-3755e5aee495/extract/0.log" Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.438643 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ced7e1ada3abf6b3b63db7d20c8ec826fce876df6438a829033802f9bd4hzwq_a11dcdce-e6bc-48a2-b273-3755e5aee495/pull/0.log" Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.563366 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-f6487bd57-jwv7f_0cf0a4ad-85dd-47df-9307-e469f075a098/manager/0.log" Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.702887 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66dfbd6f5d-r8cjw_cfeab083-1268-47aa-938e-bd91036755de/manager/0.log" Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.887620 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6db5dbd896-cfpjq_113ba11f-aeba-4710-b5f6-0991e9766d45/manager/0.log" Jan 28 16:09:15 crc kubenswrapper[4656]: I0128 16:09:15.933614 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-587c6bfdcf-xjnqt_45be18b4-f249-4c09-8875-9959686d7f8f/manager/0.log" Jan 28 16:09:16 crc kubenswrapper[4656]: I0128 16:09:16.194973 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-9q5lw_1bfa2d1e-9ab0-478a-a19d-d031a1a8a312/manager/0.log" Jan 28 16:09:16 crc kubenswrapper[4656]: I0128 16:09:16.263890 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-bfl2p_7341d49c-e9a9-4108-8a2c-bf808ccb49cf/manager/0.log" Jan 28 16:09:16 crc kubenswrapper[4656]: I0128 16:09:16.639233 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-958664b5-m9jtk_ae47e69a-49f4-4b1a-8d68-068b5e99f22a/manager/0.log" Jan 28 16:09:16 crc kubenswrapper[4656]: I0128 16:09:16.720885 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b84b46695-86ht2_a5bdaf78-b590-429f-bc9b-46c67a369456/manager/0.log" Jan 28 16:09:17 crc kubenswrapper[4656]: I0128 16:09:17.742039 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-p92zm_9277e421-df3a-49a2-81cc-86d0f7c65809/manager/0.log" Jan 28 16:09:17 crc kubenswrapper[4656]: I0128 
16:09:17.786799 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-765668569f-7kctj_0a83428f-312c-4590-beb3-8da4994c8951/manager/0.log" Jan 28 16:09:18 crc kubenswrapper[4656]: I0128 16:09:18.210092 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-694c5bfc85-rjfbj_9954b0be-71f8-430b-a61f-28a95404c0f7/manager/0.log" Jan 28 16:09:18 crc kubenswrapper[4656]: I0128 16:09:18.269613 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-ddcbfd695-gqr2d_f37006c8-da19-4d17-a6d5-f4b075f2220f/manager/0.log" Jan 28 16:09:18 crc kubenswrapper[4656]: I0128 16:09:18.446031 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5c765b4558-wjspj_50db0152-72c0-4fc3-9cd5-6b2c01127341/manager/0.log" Jan 28 16:09:18 crc kubenswrapper[4656]: I0128 16:09:18.745642 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4df55nv_3dcf45d4-628c-4071-b732-8ade2d3c4b4e/manager/0.log" Jan 28 16:09:19 crc kubenswrapper[4656]: I0128 16:09:19.320483 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-678d9cfb88-c5xvb_ab5fdcdc-7606-4e97-a65a-c98545c1a74a/operator/0.log" Jan 28 16:09:19 crc kubenswrapper[4656]: I0128 16:09:19.421537 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-tknhq_c65965be-4267-4c92-a9b1-046d85299b2c/registry-server/0.log" Jan 28 16:09:19 crc kubenswrapper[4656]: I0128 16:09:19.422992 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-57d89bf95c-gltwn_010cc4f5-4ac8-46e0-be08-80218981003e/manager/0.log" Jan 28 16:09:19 crc kubenswrapper[4656]: I0128 16:09:19.727073 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-brxps_92d1569e-5733-4779-b9fb-7feae2ea9317/manager/0.log" Jan 28 16:09:19 crc kubenswrapper[4656]: I0128 16:09:19.739075 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-rmvr2_132d53b6-84ec-44d6-8f8f-762e9595919e/manager/0.log" Jan 28 16:09:19 crc kubenswrapper[4656]: I0128 16:09:19.981117 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-sqgs8_0bb42d6d-259a-4532-b3e2-732c0f271d9a/operator/0.log" Jan 28 16:09:20 crc kubenswrapper[4656]: I0128 16:09:20.120978 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-9q9vg_e97e04fa-1b66-4373-b31f-12089f1f5b2b/manager/0.log" Jan 28 16:09:20 crc kubenswrapper[4656]: I0128 16:09:20.329827 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-bxkwv_d903ea5b-f13e-43d5-b65b-44093c70ddee/manager/0.log" Jan 28 16:09:20 crc kubenswrapper[4656]: I0128 16:09:20.441638 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d69b9c5db-nmjz8_5888a906-8758-4179-a30f-c2244ec46072/manager/0.log" Jan 28 16:09:20 crc kubenswrapper[4656]: I0128 16:09:20.531673 4656 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-767b8bc766-xlrqs_36524b9c-daa2-46d2-a732-b0964bb08873/manager/0.log" Jan 28 16:09:25 crc kubenswrapper[4656]: I0128 16:09:25.171920 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:09:25 crc kubenswrapper[4656]: E0128 16:09:25.172644 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.720500 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b8c2w"] Jan 28 16:09:29 crc kubenswrapper[4656]: E0128 16:09:29.722923 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db035db-0355-474f-8b0a-901729dd209b" containerName="container-00" Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.722968 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db035db-0355-474f-8b0a-901729dd209b" containerName="container-00" Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.723328 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="4db035db-0355-474f-8b0a-901729dd209b" containerName="container-00" Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.724744 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b8c2w" Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.757627 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b8c2w"] Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.875321 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-utilities\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w" Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.875702 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h45w\" (UniqueName: \"kubernetes.io/projected/d76256ac-0d22-49d2-ba57-036ac864356f-kube-api-access-6h45w\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w" Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.875766 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-catalog-content\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w" Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.977929 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-utilities\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " 
pod="openshift-marketplace/certified-operators-b8c2w" Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.978003 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h45w\" (UniqueName: \"kubernetes.io/projected/d76256ac-0d22-49d2-ba57-036ac864356f-kube-api-access-6h45w\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w" Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.978057 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-catalog-content\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w" Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.978790 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-catalog-content\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w" Jan 28 16:09:29 crc kubenswrapper[4656]: I0128 16:09:29.979015 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-utilities\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w" Jan 28 16:09:30 crc kubenswrapper[4656]: I0128 16:09:30.017417 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h45w\" (UniqueName: \"kubernetes.io/projected/d76256ac-0d22-49d2-ba57-036ac864356f-kube-api-access-6h45w\") pod \"certified-operators-b8c2w\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " pod="openshift-marketplace/certified-operators-b8c2w" Jan 28 16:09:30 crc kubenswrapper[4656]: I0128 16:09:30.051191 4656 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b8c2w" Jan 28 16:09:30 crc kubenswrapper[4656]: I0128 16:09:30.616621 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b8c2w"] Jan 28 16:09:31 crc kubenswrapper[4656]: I0128 16:09:31.332723 4656 generic.go:334] "Generic (PLEG): container finished" podID="d76256ac-0d22-49d2-ba57-036ac864356f" containerID="340c910a26d3bd13fb9584ce8111f10337cb97a436c25e510aece355a5a9e46f" exitCode=0 Jan 28 16:09:31 crc kubenswrapper[4656]: I0128 16:09:31.332799 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8c2w" event={"ID":"d76256ac-0d22-49d2-ba57-036ac864356f","Type":"ContainerDied","Data":"340c910a26d3bd13fb9584ce8111f10337cb97a436c25e510aece355a5a9e46f"} Jan 28 16:09:31 crc kubenswrapper[4656]: I0128 16:09:31.333043 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8c2w" event={"ID":"d76256ac-0d22-49d2-ba57-036ac864356f","Type":"ContainerStarted","Data":"1aa788840bf60cd11879a71864d4bb1a7ebaf205bb904dfe6c72a639dfb0bf8d"} Jan 28 16:09:31 crc kubenswrapper[4656]: I0128 16:09:31.336063 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:09:32 crc kubenswrapper[4656]: I0128 16:09:32.342808 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8c2w" event={"ID":"d76256ac-0d22-49d2-ba57-036ac864356f","Type":"ContainerStarted","Data":"f9d30d085cbc81d644ccb39179effeb6ce91c140b8c38d8e3a58cc5acacebd06"} Jan 28 16:09:33 crc kubenswrapper[4656]: I0128 16:09:33.353865 4656 generic.go:334] "Generic (PLEG): container finished" podID="d76256ac-0d22-49d2-ba57-036ac864356f" containerID="f9d30d085cbc81d644ccb39179effeb6ce91c140b8c38d8e3a58cc5acacebd06" exitCode=0 Jan 28 16:09:33 crc kubenswrapper[4656]: I0128 16:09:33.354040 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8c2w" event={"ID":"d76256ac-0d22-49d2-ba57-036ac864356f","Type":"ContainerDied","Data":"f9d30d085cbc81d644ccb39179effeb6ce91c140b8c38d8e3a58cc5acacebd06"} Jan 28 16:09:35 crc kubenswrapper[4656]: I0128 16:09:35.372909 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8c2w" event={"ID":"d76256ac-0d22-49d2-ba57-036ac864356f","Type":"ContainerStarted","Data":"315c14838ccaa501cafc46becd5cba73d99c86e61e653eaebb694de91a654724"} Jan 28 16:09:35 crc kubenswrapper[4656]: I0128 16:09:35.401367 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b8c2w" podStartSLOduration=3.31007837 podStartE2EDuration="6.40133712s" podCreationTimestamp="2026-01-28 16:09:29 +0000 UTC" firstStartedPulling="2026-01-28 16:09:31.335690816 +0000 UTC m=+3061.843861620" lastFinishedPulling="2026-01-28 16:09:34.426949566 +0000 UTC m=+3064.935120370" observedRunningTime="2026-01-28 16:09:35.400530027 +0000 UTC m=+3065.908700841" watchObservedRunningTime="2026-01-28 16:09:35.40133712 +0000 UTC m=+3065.909507914" Jan 28 16:09:40 crc kubenswrapper[4656]: I0128 16:09:40.052414 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b8c2w" Jan 28 16:09:40 crc kubenswrapper[4656]: I0128 16:09:40.053000 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b8c2w" 
Jan 28 16:09:40 crc kubenswrapper[4656]: I0128 16:09:40.104885 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b8c2w" Jan 28 16:09:40 crc kubenswrapper[4656]: I0128 16:09:40.210107 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:09:40 crc kubenswrapper[4656]: E0128 16:09:40.210907 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:09:40 crc kubenswrapper[4656]: I0128 16:09:40.527954 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b8c2w" Jan 28 16:09:40 crc kubenswrapper[4656]: I0128 16:09:40.601417 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b8c2w"] Jan 28 16:09:42 crc kubenswrapper[4656]: I0128 16:09:42.496002 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b8c2w" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" containerName="registry-server" containerID="cri-o://315c14838ccaa501cafc46becd5cba73d99c86e61e653eaebb694de91a654724" gracePeriod=2 Jan 28 16:09:44 crc kubenswrapper[4656]: I0128 16:09:44.528881 4656 generic.go:334] "Generic (PLEG): container finished" podID="d76256ac-0d22-49d2-ba57-036ac864356f" containerID="315c14838ccaa501cafc46becd5cba73d99c86e61e653eaebb694de91a654724" exitCode=0 Jan 28 16:09:44 crc kubenswrapper[4656]: I0128 16:09:44.528927 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8c2w" event={"ID":"d76256ac-0d22-49d2-ba57-036ac864356f","Type":"ContainerDied","Data":"315c14838ccaa501cafc46becd5cba73d99c86e61e653eaebb694de91a654724"} Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.058072 4656 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b8c2w" Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.113127 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h45w\" (UniqueName: \"kubernetes.io/projected/d76256ac-0d22-49d2-ba57-036ac864356f-kube-api-access-6h45w\") pod \"d76256ac-0d22-49d2-ba57-036ac864356f\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.113288 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-utilities\") pod \"d76256ac-0d22-49d2-ba57-036ac864356f\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.113350 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-catalog-content\") pod \"d76256ac-0d22-49d2-ba57-036ac864356f\" (UID: \"d76256ac-0d22-49d2-ba57-036ac864356f\") " Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.114341 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-utilities" (OuterVolumeSpecName: "utilities") pod "d76256ac-0d22-49d2-ba57-036ac864356f" (UID: "d76256ac-0d22-49d2-ba57-036ac864356f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.118505 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d76256ac-0d22-49d2-ba57-036ac864356f-kube-api-access-6h45w" (OuterVolumeSpecName: "kube-api-access-6h45w") pod "d76256ac-0d22-49d2-ba57-036ac864356f" (UID: "d76256ac-0d22-49d2-ba57-036ac864356f"). InnerVolumeSpecName "kube-api-access-6h45w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.182712 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d76256ac-0d22-49d2-ba57-036ac864356f" (UID: "d76256ac-0d22-49d2-ba57-036ac864356f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.216691 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h45w\" (UniqueName: \"kubernetes.io/projected/d76256ac-0d22-49d2-ba57-036ac864356f-kube-api-access-6h45w\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.216728 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.216740 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76256ac-0d22-49d2-ba57-036ac864356f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.537952 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8c2w" event={"ID":"d76256ac-0d22-49d2-ba57-036ac864356f","Type":"ContainerDied","Data":"1aa788840bf60cd11879a71864d4bb1a7ebaf205bb904dfe6c72a639dfb0bf8d"} Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.538034 4656 scope.go:117] "RemoveContainer" containerID="315c14838ccaa501cafc46becd5cba73d99c86e61e653eaebb694de91a654724" Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.538245 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b8c2w" Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.559290 4656 scope.go:117] "RemoveContainer" containerID="f9d30d085cbc81d644ccb39179effeb6ce91c140b8c38d8e3a58cc5acacebd06" Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.568020 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b8c2w"] Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.580885 4656 scope.go:117] "RemoveContainer" containerID="340c910a26d3bd13fb9584ce8111f10337cb97a436c25e510aece355a5a9e46f" Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.603072 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b8c2w"] Jan 28 16:09:45 crc kubenswrapper[4656]: I0128 16:09:45.813772 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-w7bws_c50cc4de-dd25-4337-a532-3384d5a87626/control-plane-machine-set-operator/0.log" Jan 28 16:09:46 crc kubenswrapper[4656]: I0128 16:09:46.088685 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gcdpp_44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9/machine-api-operator/0.log" Jan 28 16:09:46 crc kubenswrapper[4656]: I0128 16:09:46.088764 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gcdpp_44ae797b-b3f1-4fe5-bc46-e03b2a9a6fc9/kube-rbac-proxy/0.log" Jan 28 16:09:47 crc kubenswrapper[4656]: I0128 16:09:47.182546 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" path="/var/lib/kubelet/pods/d76256ac-0d22-49d2-ba57-036ac864356f/volumes" Jan 28 16:09:55 crc kubenswrapper[4656]: I0128 16:09:55.171413 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:09:55 crc kubenswrapper[4656]: E0128 16:09:55.172316 4656 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:09:59 crc kubenswrapper[4656]: I0128 16:09:59.687994 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-cpqlr_b66b5cb8-91c8-4122-b61a-d2f5f7815d26/cert-manager-controller/0.log" Jan 28 16:09:59 crc kubenswrapper[4656]: I0128 16:09:59.991023 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-8bd5j_46693ecf-5a40-4182-aeed-7161923e4016/cert-manager-cainjector/0.log" Jan 28 16:10:00 crc kubenswrapper[4656]: I0128 16:10:00.051416 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-jl8hn_2afeb8d3-6acc-42ee-aa3d-943c0784354c/cert-manager-webhook/0.log" Jan 28 16:10:08 crc kubenswrapper[4656]: I0128 16:10:08.171279 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:10:08 crc kubenswrapper[4656]: E0128 16:10:08.172275 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:10:12 crc kubenswrapper[4656]: I0128 16:10:12.282055 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-2bhpr_c74f938e-5184-4ea3-afe5-373ef61c779a/nmstate-console-plugin/0.log" Jan 28 16:10:12 crc kubenswrapper[4656]: I0128 16:10:12.513386 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-rjhhj_41fa0969-44fb-4cf4-916c-da0dd393a58c/nmstate-handler/0.log" Jan 28 16:10:12 crc kubenswrapper[4656]: I0128 16:10:12.582288 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-xjdms_59adf6ef-2655-44f4-ae3d-91c315439598/kube-rbac-proxy/0.log" Jan 28 16:10:12 crc kubenswrapper[4656]: I0128 16:10:12.678676 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-xjdms_59adf6ef-2655-44f4-ae3d-91c315439598/nmstate-metrics/0.log" Jan 28 16:10:12 crc kubenswrapper[4656]: I0128 16:10:12.760724 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-bvtmh_66b69ecf-d4cf-452c-a311-93cecb247ab1/nmstate-operator/0.log" Jan 28 16:10:12 crc kubenswrapper[4656]: I0128 16:10:12.880554 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-rrl4x_0b2d8d4a-d2ba-4c29-b545-f23070527595/nmstate-webhook/0.log" Jan 28 16:10:22 crc kubenswrapper[4656]: I0128 16:10:22.171179 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:10:22 crc kubenswrapper[4656]: E0128 16:10:22.172004 4656 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:10:35 crc kubenswrapper[4656]: I0128 16:10:35.171364 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:10:35 crc kubenswrapper[4656]: E0128 16:10:35.172260 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:10:41 crc kubenswrapper[4656]: I0128 16:10:41.097883 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xgjbl_a0c56151-b07f-4c02-9b4c-0b48c4dd8a03/kube-rbac-proxy/0.log" Jan 28 16:10:41 crc kubenswrapper[4656]: I0128 16:10:41.174697 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-xgjbl_a0c56151-b07f-4c02-9b4c-0b48c4dd8a03/controller/0.log" Jan 28 16:10:41 crc kubenswrapper[4656]: I0128 16:10:41.426591 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-nzvcq_555938b5-7504-41e4-9331-7be899491299/frr-k8s-webhook-server/0.log" Jan 28 16:10:41 crc kubenswrapper[4656]: I0128 16:10:41.563387 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-frr-files/0.log" Jan 28 16:10:41 crc kubenswrapper[4656]: I0128 16:10:41.851258 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-metrics/0.log" Jan 28 16:10:41 crc kubenswrapper[4656]: I0128 16:10:41.873542 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-reloader/0.log" Jan 28 16:10:41 crc kubenswrapper[4656]: I0128 16:10:41.883280 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-frr-files/0.log" Jan 28 16:10:41 crc kubenswrapper[4656]: I0128 16:10:41.922597 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-reloader/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.043072 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-frr-files/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.154498 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-reloader/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.162553 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-metrics/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.239999 4656 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-metrics/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.413687 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-metrics/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.472754 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-reloader/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.527279 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/cp-frr-files/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.547781 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/controller/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.698394 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/frr-metrics/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.705305 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/kube-rbac-proxy/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.781838 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/kube-rbac-proxy-frr/0.log" Jan 28 16:10:42 crc kubenswrapper[4656]: I0128 16:10:42.927828 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/reloader/0.log" Jan 28 16:10:43 crc kubenswrapper[4656]: I0128 16:10:43.167970 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-dfcddcb8c-gjtgk_93bc850d-d691-43b6-8668-79f21bd350a7/manager/0.log" Jan 28 16:10:43 crc kubenswrapper[4656]: I0128 16:10:43.241591 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z94g8_e31100cc-2c8a-4682-b6fb-acc4157f7d43/frr/0.log" Jan 28 16:10:43 crc kubenswrapper[4656]: I0128 16:10:43.346790 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7bdb79d58b-gsggf_585f1e9a-4070-4b23-bbab-f29ae7e95cf0/webhook-server/0.log" Jan 28 16:10:43 crc kubenswrapper[4656]: I0128 16:10:43.521692 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k4qr2_8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3/kube-rbac-proxy/0.log" Jan 28 16:10:43 crc kubenswrapper[4656]: I0128 16:10:43.803750 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k4qr2_8ffb0cda-bbe7-41e1-acdb-fd11fd2e33a3/speaker/0.log" Jan 28 16:10:46 crc kubenswrapper[4656]: I0128 16:10:46.171381 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:10:46 crc kubenswrapper[4656]: E0128 16:10:46.171735 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:10:56 crc kubenswrapper[4656]: I0128 16:10:56.813138 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d_f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f/util/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.076631 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d_f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f/util/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.104900 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d_f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f/pull/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.139032 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d_f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f/pull/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.322811 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d_f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f/util/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.338517 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d_f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f/pull/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.348211 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf7m2d_f46f83b8-e7b1-47eb-a3b6-58f208cf8a4f/extract/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.547037 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_dafc02bf-d18b-4177-afa6-ac17360b54e9/util/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.741080 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_dafc02bf-d18b-4177-afa6-ac17360b54e9/pull/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.748868 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_dafc02bf-d18b-4177-afa6-ac17360b54e9/pull/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.851432 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_dafc02bf-d18b-4177-afa6-ac17360b54e9/util/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.984306 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_dafc02bf-d18b-4177-afa6-ac17360b54e9/extract/0.log" Jan 28 16:10:57 crc kubenswrapper[4656]: I0128 16:10:57.989039 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_dafc02bf-d18b-4177-afa6-ac17360b54e9/util/0.log" Jan 28 16:10:57 crc 
kubenswrapper[4656]: I0128 16:10:57.992749 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zlfnj_dafc02bf-d18b-4177-afa6-ac17360b54e9/pull/0.log" Jan 28 16:10:58 crc kubenswrapper[4656]: I0128 16:10:58.179289 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9zl7_769c7d2c-1d96-4056-9165-ebf9a1cefc45/extract-utilities/0.log" Jan 28 16:10:58 crc kubenswrapper[4656]: I0128 16:10:58.438465 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9zl7_769c7d2c-1d96-4056-9165-ebf9a1cefc45/extract-content/0.log" Jan 28 16:10:58 crc kubenswrapper[4656]: I0128 16:10:58.444657 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9zl7_769c7d2c-1d96-4056-9165-ebf9a1cefc45/extract-utilities/0.log" Jan 28 16:10:58 crc kubenswrapper[4656]: I0128 16:10:58.445724 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9zl7_769c7d2c-1d96-4056-9165-ebf9a1cefc45/extract-content/0.log" Jan 28 16:10:58 crc kubenswrapper[4656]: I0128 16:10:58.647113 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9zl7_769c7d2c-1d96-4056-9165-ebf9a1cefc45/extract-content/0.log" Jan 28 16:10:58 crc kubenswrapper[4656]: I0128 16:10:58.663615 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9zl7_769c7d2c-1d96-4056-9165-ebf9a1cefc45/extract-utilities/0.log" Jan 28 16:10:59 crc kubenswrapper[4656]: I0128 16:10:59.084672 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l9zl7_769c7d2c-1d96-4056-9165-ebf9a1cefc45/registry-server/0.log" Jan 28 16:10:59 crc kubenswrapper[4656]: I0128 16:10:59.102320 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gsg5_6c5ec616-76e4-4b6f-93ce-ca2dba833b37/extract-utilities/0.log" Jan 28 16:10:59 crc kubenswrapper[4656]: I0128 16:10:59.280834 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gsg5_6c5ec616-76e4-4b6f-93ce-ca2dba833b37/extract-utilities/0.log" Jan 28 16:10:59 crc kubenswrapper[4656]: I0128 16:10:59.389496 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gsg5_6c5ec616-76e4-4b6f-93ce-ca2dba833b37/extract-content/0.log" Jan 28 16:10:59 crc kubenswrapper[4656]: I0128 16:10:59.580589 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gsg5_6c5ec616-76e4-4b6f-93ce-ca2dba833b37/extract-content/0.log" Jan 28 16:10:59 crc kubenswrapper[4656]: I0128 16:10:59.805821 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gsg5_6c5ec616-76e4-4b6f-93ce-ca2dba833b37/extract-content/0.log" Jan 28 16:10:59 crc kubenswrapper[4656]: I0128 16:10:59.811516 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gsg5_6c5ec616-76e4-4b6f-93ce-ca2dba833b37/extract-utilities/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.142858 4656 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-2xzq6_c48ec7e7-12ff-47f6-9a82-59078f7c2b04/marketplace-operator/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.173034 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:11:00 crc kubenswrapper[4656]: E0128 16:11:00.173301 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.282822 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2b7pm_816a03ab-31e5-4d9a-b66c-3787ac9335a9/extract-utilities/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.493302 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gsg5_6c5ec616-76e4-4b6f-93ce-ca2dba833b37/registry-server/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.543794 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2b7pm_816a03ab-31e5-4d9a-b66c-3787ac9335a9/extract-utilities/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.569068 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2b7pm_816a03ab-31e5-4d9a-b66c-3787ac9335a9/extract-content/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.572439 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2b7pm_816a03ab-31e5-4d9a-b66c-3787ac9335a9/extract-content/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.859850 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2b7pm_816a03ab-31e5-4d9a-b66c-3787ac9335a9/extract-utilities/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.940681 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2b7pm_816a03ab-31e5-4d9a-b66c-3787ac9335a9/extract-content/0.log" Jan 28 16:11:00 crc kubenswrapper[4656]: I0128 16:11:00.996902 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2b7pm_816a03ab-31e5-4d9a-b66c-3787ac9335a9/registry-server/0.log" Jan 28 16:11:01 crc kubenswrapper[4656]: I0128 16:11:01.148746 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rzhzn_885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9/extract-utilities/0.log" Jan 28 16:11:01 crc kubenswrapper[4656]: I0128 16:11:01.342848 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rzhzn_885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9/extract-content/0.log" Jan 28 16:11:01 crc kubenswrapper[4656]: I0128 16:11:01.346351 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rzhzn_885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9/extract-utilities/0.log" Jan 28 16:11:01 crc kubenswrapper[4656]: I0128 16:11:01.363758 4656 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-rzhzn_885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9/extract-content/0.log" Jan 28 16:11:01 crc kubenswrapper[4656]: I0128 16:11:01.570715 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rzhzn_885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9/extract-content/0.log" Jan 28 16:11:01 crc kubenswrapper[4656]: I0128 16:11:01.572589 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rzhzn_885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9/extract-utilities/0.log" Jan 28 16:11:01 crc kubenswrapper[4656]: I0128 16:11:01.745071 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rzhzn_885e9bc9-ca09-4e4e-95ef-7b95d52c8dc9/registry-server/0.log" Jan 28 16:11:12 crc kubenswrapper[4656]: I0128 16:11:12.171552 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:11:12 crc kubenswrapper[4656]: E0128 16:11:12.172179 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:11:25 crc kubenswrapper[4656]: I0128 16:11:25.170733 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:11:25 crc kubenswrapper[4656]: E0128 16:11:25.171485 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:11:33 crc kubenswrapper[4656]: I0128 16:11:33.894183 4656 scope.go:117] "RemoveContainer" containerID="25707c1b7aa34daf8d1d203593fed712625cdedcf4f7a415d31a120483a3edfb" Jan 28 16:11:33 crc kubenswrapper[4656]: I0128 16:11:33.937577 4656 scope.go:117] "RemoveContainer" containerID="1023efe98e9294c68f6dd563afee559a68f500f630f7dc52bc58bfe310af0080" Jan 28 16:11:33 crc kubenswrapper[4656]: I0128 16:11:33.973827 4656 scope.go:117] "RemoveContainer" containerID="38f722e1bfe839109981c0f0f0db4eff3febe78c06d24b651cd5373d7bc12663" Jan 28 16:11:38 crc kubenswrapper[4656]: I0128 16:11:38.171286 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" Jan 28 16:11:38 crc kubenswrapper[4656]: E0128 16:11:38.172330 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc" Jan 28 16:11:51 crc kubenswrapper[4656]: I0128 16:11:51.180383 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c" 
Jan 28 16:11:51 crc kubenswrapper[4656]: E0128 16:11:51.181217 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:12:02 crc kubenswrapper[4656]: I0128 16:12:02.170559 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"
Jan 28 16:12:02 crc kubenswrapper[4656]: E0128 16:12:02.171566 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:12:17 crc kubenswrapper[4656]: I0128 16:12:17.171864 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"
Jan 28 16:12:17 crc kubenswrapper[4656]: E0128 16:12:17.172662 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:12:32 crc kubenswrapper[4656]: I0128 16:12:32.171018 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"
Jan 28 16:12:32 crc kubenswrapper[4656]: E0128 16:12:32.173433 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:12:43 crc kubenswrapper[4656]: I0128 16:12:43.123787 4656 generic.go:334] "Generic (PLEG): container finished" podID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerID="6baa9b5e64a74d90d0dfc29c5dbed884e993ea7992e2b03687f98f3fda85bb0b" exitCode=0
Jan 28 16:12:43 crc kubenswrapper[4656]: I0128 16:12:43.123838 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5j5mh/must-gather-llsvb" event={"ID":"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9","Type":"ContainerDied","Data":"6baa9b5e64a74d90d0dfc29c5dbed884e993ea7992e2b03687f98f3fda85bb0b"}
Jan 28 16:12:43 crc kubenswrapper[4656]: I0128 16:12:43.125182 4656 scope.go:117] "RemoveContainer" containerID="6baa9b5e64a74d90d0dfc29c5dbed884e993ea7992e2b03687f98f3fda85bb0b"
Jan 28 16:12:44 crc kubenswrapper[4656]: I0128 16:12:44.064000 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5j5mh_must-gather-llsvb_2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9/gather/0.log"
Jan 28 16:12:47 crc kubenswrapper[4656]: I0128 16:12:47.171132 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"
Jan 28 16:12:47 crc kubenswrapper[4656]: E0128 16:12:47.171898 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:12:52 crc kubenswrapper[4656]: I0128 16:12:52.793507 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5j5mh/must-gather-llsvb"]
Jan 28 16:12:52 crc kubenswrapper[4656]: I0128 16:12:52.794394 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5j5mh/must-gather-llsvb" podUID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerName="copy" containerID="cri-o://f859a2e2caa0fb544c6c6a44179d086e76e997ea8b2edcf00d53e17d188daeea" gracePeriod=2
Jan 28 16:12:52 crc kubenswrapper[4656]: I0128 16:12:52.801014 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5j5mh/must-gather-llsvb"]
Jan 28 16:12:53 crc kubenswrapper[4656]: E0128 16:12:53.032456 4656 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c1ba1d6_2c0d_4263_a2ae_a744f60f89b9.slice/crio-conmon-f859a2e2caa0fb544c6c6a44179d086e76e997ea8b2edcf00d53e17d188daeea.scope\": RecentStats: unable to find data in memory cache]"
Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.224879 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5j5mh_must-gather-llsvb_2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9/copy/0.log"
Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.225527 4656 generic.go:334] "Generic (PLEG): container finished" podID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerID="f859a2e2caa0fb544c6c6a44179d086e76e997ea8b2edcf00d53e17d188daeea" exitCode=143
Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.735048 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5j5mh_must-gather-llsvb_2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9/copy/0.log"
Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.736068 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5j5mh/must-gather-llsvb"
Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.866813 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-must-gather-output\") pod \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\" (UID: \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\") "
Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.866878 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7nj2\" (UniqueName: \"kubernetes.io/projected/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-kube-api-access-w7nj2\") pod \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\" (UID: \"2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9\") "
Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.882590 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-kube-api-access-w7nj2" (OuterVolumeSpecName: "kube-api-access-w7nj2") pod "2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" (UID: "2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9"). InnerVolumeSpecName "kube-api-access-w7nj2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.969347 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7nj2\" (UniqueName: \"kubernetes.io/projected/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-kube-api-access-w7nj2\") on node \"crc\" DevicePath \"\""
Jan 28 16:12:53 crc kubenswrapper[4656]: I0128 16:12:53.992336 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" (UID: "2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:12:54 crc kubenswrapper[4656]: I0128 16:12:54.071145 4656 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 28 16:12:54 crc kubenswrapper[4656]: I0128 16:12:54.235756 4656 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5j5mh_must-gather-llsvb_2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9/copy/0.log"
Jan 28 16:12:54 crc kubenswrapper[4656]: I0128 16:12:54.236141 4656 scope.go:117] "RemoveContainer" containerID="f859a2e2caa0fb544c6c6a44179d086e76e997ea8b2edcf00d53e17d188daeea"
Jan 28 16:12:54 crc kubenswrapper[4656]: I0128 16:12:54.236199 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5j5mh/must-gather-llsvb"
Jan 28 16:12:54 crc kubenswrapper[4656]: I0128 16:12:54.266581 4656 scope.go:117] "RemoveContainer" containerID="6baa9b5e64a74d90d0dfc29c5dbed884e993ea7992e2b03687f98f3fda85bb0b"
Jan 28 16:12:55 crc kubenswrapper[4656]: I0128 16:12:55.180765 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" path="/var/lib/kubelet/pods/2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9/volumes"
Jan 28 16:13:01 crc kubenswrapper[4656]: I0128 16:13:01.176457 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"
Jan 28 16:13:01 crc kubenswrapper[4656]: E0128 16:13:01.177374 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:13:14 crc kubenswrapper[4656]: I0128 16:13:14.170561 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"
Jan 28 16:13:14 crc kubenswrapper[4656]: E0128 16:13:14.172545 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:13:26 crc kubenswrapper[4656]: I0128 16:13:26.170847 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"
Jan 28 16:13:26 crc kubenswrapper[4656]: E0128 16:13:26.171675 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:13:37 crc kubenswrapper[4656]: I0128 16:13:37.170834 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"
Jan 28 16:13:37 crc kubenswrapper[4656]: E0128 16:13:37.171730 4656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8llkk_openshift-machine-config-operator(06d899c2-5ac5-4760-b71a-06c970fdc9fc)\"" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" podUID="06d899c2-5ac5-4760-b71a-06c970fdc9fc"
Jan 28 16:13:48 crc kubenswrapper[4656]: I0128 16:13:48.171467 4656 scope.go:117] "RemoveContainer" containerID="75d4f3e147d5be0cac47144822764f0cb7bde7c41b319a3188ea67bf64282e6c"
Jan 28 16:13:48 crc kubenswrapper[4656]: I0128 16:13:48.731008 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8llkk" event={"ID":"06d899c2-5ac5-4760-b71a-06c970fdc9fc","Type":"ContainerStarted","Data":"d7999db764e2d77e39711d0b36b7eb67c612bc23b60021126111ecb7d71dcd8b"}
Jan 28 16:14:34 crc kubenswrapper[4656]: I0128 16:14:34.090770 4656 scope.go:117] "RemoveContainer" containerID="56f14a3cf7039489472a1e9af12718c79dc3de07799d984f8c3f92c6f2afe477"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.164801 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"]
Jan 28 16:15:00 crc kubenswrapper[4656]: E0128 16:15:00.166042 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" containerName="extract-utilities"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.166062 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" containerName="extract-utilities"
Jan 28 16:15:00 crc kubenswrapper[4656]: E0128 16:15:00.166083 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerName="gather"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.166090 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerName="gather"
Jan 28 16:15:00 crc kubenswrapper[4656]: E0128 16:15:00.166114 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" containerName="registry-server"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.166122 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" containerName="registry-server"
Jan 28 16:15:00 crc kubenswrapper[4656]: E0128 16:15:00.166148 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerName="copy"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.166179 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerName="copy"
Jan 28 16:15:00 crc kubenswrapper[4656]: E0128 16:15:00.166197 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" containerName="extract-content"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.166204 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" containerName="extract-content"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.166464 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerName="copy"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.166500 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c1ba1d6-2c0d-4263-a2ae-a744f60f89b9" containerName="gather"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.166515 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="d76256ac-0d22-49d2-ba57-036ac864356f" containerName="registry-server"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.167330 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.174896 4656 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.175439 4656 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.212872 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"]
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.354268 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/184f8a72-6f46-453d-8eda-d5526b224693-config-volume\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.354323 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/184f8a72-6f46-453d-8eda-d5526b224693-secret-volume\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.354357 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqc4t\" (UniqueName: \"kubernetes.io/projected/184f8a72-6f46-453d-8eda-d5526b224693-kube-api-access-dqc4t\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.455818 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/184f8a72-6f46-453d-8eda-d5526b224693-config-volume\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.455893 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/184f8a72-6f46-453d-8eda-d5526b224693-secret-volume\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.455940 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqc4t\" (UniqueName: \"kubernetes.io/projected/184f8a72-6f46-453d-8eda-d5526b224693-kube-api-access-dqc4t\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.456911 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/184f8a72-6f46-453d-8eda-d5526b224693-config-volume\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.462493 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/184f8a72-6f46-453d-8eda-d5526b224693-secret-volume\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.474962 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqc4t\" (UniqueName: \"kubernetes.io/projected/184f8a72-6f46-453d-8eda-d5526b224693-kube-api-access-dqc4t\") pod \"collect-profiles-29493615-7b997\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"
Jan 28 16:15:00 crc kubenswrapper[4656]: I0128 16:15:00.511526 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"
Jan 28 16:15:01 crc kubenswrapper[4656]: I0128 16:15:01.005977 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"]
Jan 28 16:15:01 crc kubenswrapper[4656]: W0128 16:15:01.025309 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod184f8a72_6f46_453d_8eda_d5526b224693.slice/crio-37773ec7aeb71a870187d561534cf775fa091521097fa3c92759b4ea8afd0d46 WatchSource:0}: Error finding container 37773ec7aeb71a870187d561534cf775fa091521097fa3c92759b4ea8afd0d46: Status 404 returned error can't find the container with id 37773ec7aeb71a870187d561534cf775fa091521097fa3c92759b4ea8afd0d46
Jan 28 16:15:01 crc kubenswrapper[4656]: I0128 16:15:01.503711 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" event={"ID":"184f8a72-6f46-453d-8eda-d5526b224693","Type":"ContainerStarted","Data":"abaa04e54f286aa11e362c3eecbd9be04d343a82a9f11988bf954f99d0af2ee1"}
Jan 28 16:15:01 crc kubenswrapper[4656]: I0128 16:15:01.504948 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" event={"ID":"184f8a72-6f46-453d-8eda-d5526b224693","Type":"ContainerStarted","Data":"37773ec7aeb71a870187d561534cf775fa091521097fa3c92759b4ea8afd0d46"}
Jan 28 16:15:01 crc kubenswrapper[4656]: I0128 16:15:01.527094 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" podStartSLOduration=1.5270641999999999 podStartE2EDuration="1.5270642s" podCreationTimestamp="2026-01-28 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:15:01.525376062 +0000 UTC m=+3392.033546866" watchObservedRunningTime="2026-01-28 16:15:01.5270642 +0000 UTC m=+3392.035235004"
Jan 28 16:15:02 crc kubenswrapper[4656]: I0128 16:15:02.513997 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" event={"ID":"184f8a72-6f46-453d-8eda-d5526b224693","Type":"ContainerDied","Data":"abaa04e54f286aa11e362c3eecbd9be04d343a82a9f11988bf954f99d0af2ee1"}
Jan 28 16:15:02 crc kubenswrapper[4656]: I0128 16:15:02.513950 4656 generic.go:334] "Generic (PLEG): container finished" podID="184f8a72-6f46-453d-8eda-d5526b224693" containerID="abaa04e54f286aa11e362c3eecbd9be04d343a82a9f11988bf954f99d0af2ee1" exitCode=0
Jan 28 16:15:03 crc kubenswrapper[4656]: I0128 16:15:03.856645 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"
Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.023275 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/184f8a72-6f46-453d-8eda-d5526b224693-config-volume\") pod \"184f8a72-6f46-453d-8eda-d5526b224693\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") "
Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.023504 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqc4t\" (UniqueName: \"kubernetes.io/projected/184f8a72-6f46-453d-8eda-d5526b224693-kube-api-access-dqc4t\") pod \"184f8a72-6f46-453d-8eda-d5526b224693\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") "
Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.023566 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/184f8a72-6f46-453d-8eda-d5526b224693-secret-volume\") pod \"184f8a72-6f46-453d-8eda-d5526b224693\" (UID: \"184f8a72-6f46-453d-8eda-d5526b224693\") "
Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.024345 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/184f8a72-6f46-453d-8eda-d5526b224693-config-volume" (OuterVolumeSpecName: "config-volume") pod "184f8a72-6f46-453d-8eda-d5526b224693" (UID: "184f8a72-6f46-453d-8eda-d5526b224693"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.031212 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/184f8a72-6f46-453d-8eda-d5526b224693-kube-api-access-dqc4t" (OuterVolumeSpecName: "kube-api-access-dqc4t") pod "184f8a72-6f46-453d-8eda-d5526b224693" (UID: "184f8a72-6f46-453d-8eda-d5526b224693"). InnerVolumeSpecName "kube-api-access-dqc4t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.035978 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/184f8a72-6f46-453d-8eda-d5526b224693-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "184f8a72-6f46-453d-8eda-d5526b224693" (UID: "184f8a72-6f46-453d-8eda-d5526b224693"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.125400 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqc4t\" (UniqueName: \"kubernetes.io/projected/184f8a72-6f46-453d-8eda-d5526b224693-kube-api-access-dqc4t\") on node \"crc\" DevicePath \"\""
Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.125445 4656 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/184f8a72-6f46-453d-8eda-d5526b224693-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.125459 4656 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/184f8a72-6f46-453d-8eda-d5526b224693-config-volume\") on node \"crc\" DevicePath \"\""
Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.296971 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n"]
Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.303351 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-d2j9n"]
Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.529132 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997" event={"ID":"184f8a72-6f46-453d-8eda-d5526b224693","Type":"ContainerDied","Data":"37773ec7aeb71a870187d561534cf775fa091521097fa3c92759b4ea8afd0d46"}
Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.529223 4656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37773ec7aeb71a870187d561534cf775fa091521097fa3c92759b4ea8afd0d46"
Jan 28 16:15:04 crc kubenswrapper[4656]: I0128 16:15:04.529307 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-7b997"
Jan 28 16:15:05 crc kubenswrapper[4656]: I0128 16:15:05.181372 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="647c2e48-5d47-46f5-bd41-1512da5aef27" path="/var/lib/kubelet/pods/647c2e48-5d47-46f5-bd41-1512da5aef27/volumes"
Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.468320 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7mnj6"]
Jan 28 16:15:18 crc kubenswrapper[4656]: E0128 16:15:18.469378 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184f8a72-6f46-453d-8eda-d5526b224693" containerName="collect-profiles"
Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.469405 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="184f8a72-6f46-453d-8eda-d5526b224693" containerName="collect-profiles"
Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.469583 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="184f8a72-6f46-453d-8eda-d5526b224693" containerName="collect-profiles"
Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.470722 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.484736 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7mnj6"]
Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.563670 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-catalog-content\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.563827 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-utilities\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.564020 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t246q\" (UniqueName: \"kubernetes.io/projected/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-kube-api-access-t246q\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.665184 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t246q\" (UniqueName: \"kubernetes.io/projected/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-kube-api-access-t246q\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.665279 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-catalog-content\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.665321 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-utilities\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.665774 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-utilities\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.665976 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-catalog-content\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.690475 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t246q\" (UniqueName: \"kubernetes.io/projected/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-kube-api-access-t246q\") pod \"community-operators-7mnj6\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") " pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:18 crc kubenswrapper[4656]: I0128 16:15:18.788682 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:19 crc kubenswrapper[4656]: I0128 16:15:19.351884 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7mnj6"]
Jan 28 16:15:19 crc kubenswrapper[4656]: I0128 16:15:19.636383 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mnj6" event={"ID":"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c","Type":"ContainerStarted","Data":"495b083fb6504c78ffde9f5b26f1d7ed8eed0987c7b2949f27fb691ff953481f"}
Jan 28 16:15:20 crc kubenswrapper[4656]: I0128 16:15:20.646480 4656 generic.go:334] "Generic (PLEG): container finished" podID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerID="492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d" exitCode=0
Jan 28 16:15:20 crc kubenswrapper[4656]: I0128 16:15:20.646589 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mnj6" event={"ID":"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c","Type":"ContainerDied","Data":"492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d"}
Jan 28 16:15:20 crc kubenswrapper[4656]: I0128 16:15:20.649678 4656 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 16:15:22 crc kubenswrapper[4656]: I0128 16:15:22.678707 4656 generic.go:334] "Generic (PLEG): container finished" podID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerID="2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b" exitCode=0
Jan 28 16:15:22 crc kubenswrapper[4656]: I0128 16:15:22.678881 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mnj6" event={"ID":"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c","Type":"ContainerDied","Data":"2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b"}
Jan 28 16:15:23 crc kubenswrapper[4656]: I0128 16:15:23.691435 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mnj6" event={"ID":"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c","Type":"ContainerStarted","Data":"73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890"}
Jan 28 16:15:23 crc kubenswrapper[4656]: I0128 16:15:23.714132 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7mnj6" podStartSLOduration=3.23639231 podStartE2EDuration="5.714106244s" podCreationTimestamp="2026-01-28 16:15:18 +0000 UTC" firstStartedPulling="2026-01-28 16:15:20.64928715 +0000 UTC m=+3411.157457954" lastFinishedPulling="2026-01-28 16:15:23.127001084 +0000 UTC m=+3413.635171888" observedRunningTime="2026-01-28 16:15:23.708895574 +0000 UTC m=+3414.217066378" watchObservedRunningTime="2026-01-28 16:15:23.714106244 +0000 UTC m=+3414.222277038"
Jan 28 16:15:28 crc kubenswrapper[4656]: I0128 16:15:28.789060 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:28 crc kubenswrapper[4656]: I0128 16:15:28.789662 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:28 crc kubenswrapper[4656]: I0128 16:15:28.842728 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:29 crc kubenswrapper[4656]: I0128 16:15:29.861681 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:29 crc kubenswrapper[4656]: I0128 16:15:29.909923 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7mnj6"]
Jan 28 16:15:31 crc kubenswrapper[4656]: I0128 16:15:31.821148 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7mnj6" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerName="registry-server" containerID="cri-o://73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890" gracePeriod=2
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.273786 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.520075 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-catalog-content\") pod \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") "
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.520221 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t246q\" (UniqueName: \"kubernetes.io/projected/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-kube-api-access-t246q\") pod \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") "
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.521436 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-utilities\") pod \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\" (UID: \"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c\") "
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.522401 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-utilities" (OuterVolumeSpecName: "utilities") pod "0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" (UID: "0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.527496 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-kube-api-access-t246q" (OuterVolumeSpecName: "kube-api-access-t246q") pod "0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" (UID: "0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c"). InnerVolumeSpecName "kube-api-access-t246q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.577141 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" (UID: "0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.623728 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.623768 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.623782 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t246q\" (UniqueName: \"kubernetes.io/projected/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c-kube-api-access-t246q\") on node \"crc\" DevicePath \"\""
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.831725 4656 generic.go:334] "Generic (PLEG): container finished" podID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerID="73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890" exitCode=0
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.831799 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mnj6"
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.831809 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mnj6" event={"ID":"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c","Type":"ContainerDied","Data":"73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890"}
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.831864 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mnj6" event={"ID":"0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c","Type":"ContainerDied","Data":"495b083fb6504c78ffde9f5b26f1d7ed8eed0987c7b2949f27fb691ff953481f"}
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.831924 4656 scope.go:117] "RemoveContainer" containerID="73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890"
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.858023 4656 scope.go:117] "RemoveContainer" containerID="2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b"
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.882061 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7mnj6"]
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.889558 4656 scope.go:117] "RemoveContainer" containerID="492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d"
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.890228 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7mnj6"]
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.923576 4656 scope.go:117] "RemoveContainer" containerID="73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890"
Jan 28 16:15:32 crc kubenswrapper[4656]: E0128 16:15:32.924299 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890\": container with ID starting with 73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890 not found: ID does not exist" containerID="73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890"
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.924368 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890"} err="failed to get container status \"73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890\": rpc error: code = NotFound desc = could not find container \"73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890\": container with ID starting with 73dfa9b4fa746cf037b771b5c5439b59ef943104667aa0dc1af256fbeb3c4890 not found: ID does not exist"
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.924409 4656 scope.go:117] "RemoveContainer" containerID="2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b"
Jan 28 16:15:32 crc kubenswrapper[4656]: E0128 16:15:32.924707 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b\": container with ID starting with 2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b not found: ID does not exist" containerID="2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b"
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.924733 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b"} err="failed to get container status \"2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b\": rpc error: code = NotFound desc = could not find container \"2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b\": container with ID starting with 2e7c3b193db474208b00d20cb99188e76b95a9e898b2c524657eaed3fe326e3b not found: ID does not exist"
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.924749 4656 scope.go:117] "RemoveContainer" containerID="492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d"
Jan 28 16:15:32 crc kubenswrapper[4656]: E0128 16:15:32.925116 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d\": container with ID starting with 492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d not found: ID does not exist" containerID="492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d"
Jan 28 16:15:32 crc kubenswrapper[4656]: I0128 16:15:32.925304 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d"} err="failed to get container status \"492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d\": rpc error: code = NotFound desc = could not find container \"492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d\": container with ID starting with 492f74aeeb4b463251e6f3cffd4ac313508e7ef7cd2fd6a98868702676a99c9d not found: ID does not exist"
Jan 28 16:15:33 crc kubenswrapper[4656]: I0128 16:15:33.182739 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" path="/var/lib/kubelet/pods/0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c/volumes"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.068544 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-plxhp"]
Jan 28 16:15:34 crc kubenswrapper[4656]: E0128 16:15:34.069091 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerName="extract-content"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.069109 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerName="extract-content"
Jan 28 16:15:34 crc kubenswrapper[4656]: E0128 16:15:34.069141 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerName="extract-utilities"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.069152 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerName="extract-utilities"
Jan 28 16:15:34 crc kubenswrapper[4656]: E0128 16:15:34.069214 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerName="registry-server"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.069224 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerName="registry-server"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.069467 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e9b05ae-2c24-4877-afbc-dc9a7eee7d8c" containerName="registry-server"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.071516 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.076773 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-plxhp"]
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.143644 4656 scope.go:117] "RemoveContainer" containerID="e478d6329d394a8ee6946b81a0192102dc331c4b0fad2b32a9549e2af991b8fd"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.213066 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-catalog-content\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.213123 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-utilities\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.213279 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4njxx\" (UniqueName: \"kubernetes.io/projected/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-kube-api-access-4njxx\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.314477 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-catalog-content\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.314586 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-utilities\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.314965 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-catalog-content\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.314986 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-utilities\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.315142 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4njxx\" (UniqueName: \"kubernetes.io/projected/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-kube-api-access-4njxx\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.333301 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4njxx\" (UniqueName: \"kubernetes.io/projected/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-kube-api-access-4njxx\") pod \"redhat-marketplace-plxhp\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") " pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.443630 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:34 crc kubenswrapper[4656]: I0128 16:15:34.943818 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-plxhp"]
Jan 28 16:15:34 crc kubenswrapper[4656]: W0128 16:15:34.947880 4656 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6b0b613_5382_46d5_9f5a_b53999ea8ae8.slice/crio-fb34719426ec668258b15e9a6104d39934d9ab2a5d597b9c4209ae80d7072658 WatchSource:0}: Error finding container fb34719426ec668258b15e9a6104d39934d9ab2a5d597b9c4209ae80d7072658: Status 404 returned error can't find the container with id fb34719426ec668258b15e9a6104d39934d9ab2a5d597b9c4209ae80d7072658
Jan 28 16:15:35 crc kubenswrapper[4656]: I0128 16:15:35.870117 4656 generic.go:334] "Generic (PLEG): container finished" podID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerID="b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963" exitCode=0
Jan 28 16:15:35 crc kubenswrapper[4656]: I0128 16:15:35.870210 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plxhp" event={"ID":"d6b0b613-5382-46d5-9f5a-b53999ea8ae8","Type":"ContainerDied","Data":"b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963"}
Jan 28 16:15:35 crc kubenswrapper[4656]: I0128 16:15:35.870485 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plxhp" event={"ID":"d6b0b613-5382-46d5-9f5a-b53999ea8ae8","Type":"ContainerStarted","Data":"fb34719426ec668258b15e9a6104d39934d9ab2a5d597b9c4209ae80d7072658"}
Jan 28 16:15:37 crc kubenswrapper[4656]: I0128 16:15:37.887957 4656 generic.go:334] "Generic (PLEG): container finished" podID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerID="7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5" exitCode=0
Jan 28 16:15:37 crc kubenswrapper[4656]: I0128 16:15:37.888037 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plxhp" event={"ID":"d6b0b613-5382-46d5-9f5a-b53999ea8ae8","Type":"ContainerDied","Data":"7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5"}
Jan 28 16:15:38 crc kubenswrapper[4656]: I0128 16:15:38.899208 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plxhp" event={"ID":"d6b0b613-5382-46d5-9f5a-b53999ea8ae8","Type":"ContainerStarted","Data":"62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239"}
Jan 28 16:15:38 crc kubenswrapper[4656]: I0128 16:15:38.920306 4656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-plxhp" podStartSLOduration=2.336732715 podStartE2EDuration="4.920278656s" podCreationTimestamp="2026-01-28 16:15:34 +0000 UTC" firstStartedPulling="2026-01-28 16:15:35.872207592 +0000 UTC m=+3426.380378396" lastFinishedPulling="2026-01-28 16:15:38.455753543 +0000 UTC m=+3428.963924337" observedRunningTime="2026-01-28 16:15:38.917908928 +0000 UTC m=+3429.426079732" watchObservedRunningTime="2026-01-28 16:15:38.920278656 +0000 UTC m=+3429.428449470"
Jan 28 16:15:44 crc kubenswrapper[4656]: I0128 16:15:44.444874 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:44 crc kubenswrapper[4656]: I0128 16:15:44.445552 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:44 crc kubenswrapper[4656]: I0128 16:15:44.495624 4656 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:44 crc kubenswrapper[4656]: I0128 16:15:44.982349 4656 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:45 crc kubenswrapper[4656]: I0128 16:15:45.030762 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-plxhp"]
Jan 28 16:15:46 crc kubenswrapper[4656]: I0128 16:15:46.956651 4656 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-plxhp" podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerName="registry-server" containerID="cri-o://62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239" gracePeriod=2
Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.427552 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.602355 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-catalog-content\") pod \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") "
Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.602525 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4njxx\" (UniqueName: \"kubernetes.io/projected/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-kube-api-access-4njxx\") pod \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") "
Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.602656 4656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-utilities\") pod \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\" (UID: \"d6b0b613-5382-46d5-9f5a-b53999ea8ae8\") "
Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.603418 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-utilities" (OuterVolumeSpecName: "utilities") pod "d6b0b613-5382-46d5-9f5a-b53999ea8ae8" (UID: "d6b0b613-5382-46d5-9f5a-b53999ea8ae8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.608430 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-kube-api-access-4njxx" (OuterVolumeSpecName: "kube-api-access-4njxx") pod "d6b0b613-5382-46d5-9f5a-b53999ea8ae8" (UID: "d6b0b613-5382-46d5-9f5a-b53999ea8ae8"). InnerVolumeSpecName "kube-api-access-4njxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.630001 4656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6b0b613-5382-46d5-9f5a-b53999ea8ae8" (UID: "d6b0b613-5382-46d5-9f5a-b53999ea8ae8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.704207 4656 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.704247 4656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4njxx\" (UniqueName: \"kubernetes.io/projected/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-kube-api-access-4njxx\") on node \"crc\" DevicePath \"\""
Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.704262 4656 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b0b613-5382-46d5-9f5a-b53999ea8ae8-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.964973 4656 generic.go:334] "Generic (PLEG): container finished" podID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerID="62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239" exitCode=0
Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.965021 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plxhp" event={"ID":"d6b0b613-5382-46d5-9f5a-b53999ea8ae8","Type":"ContainerDied","Data":"62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239"}
Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.965031 4656 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plxhp"
Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.965051 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plxhp" event={"ID":"d6b0b613-5382-46d5-9f5a-b53999ea8ae8","Type":"ContainerDied","Data":"fb34719426ec668258b15e9a6104d39934d9ab2a5d597b9c4209ae80d7072658"}
Jan 28 16:15:47 crc kubenswrapper[4656]: I0128 16:15:47.965070 4656 scope.go:117] "RemoveContainer" containerID="62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239"
Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.001065 4656 scope.go:117] "RemoveContainer" containerID="7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5"
Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.006404 4656 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-plxhp"]
Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.014298 4656 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-plxhp"]
Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.017993 4656 scope.go:117] "RemoveContainer" containerID="b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963"
Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.052698 4656 scope.go:117] "RemoveContainer" containerID="62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239"
Jan 28 16:15:48 crc kubenswrapper[4656]: E0128 16:15:48.053201 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239\": container with ID starting with 62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239 not found: ID does not exist" containerID="62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239"
Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.053244 4656
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239"} err="failed to get container status \"62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239\": rpc error: code = NotFound desc = could not find container \"62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239\": container with ID starting with 62fe2c0d50c8af20424c3318930352ccad2f023fa77d3798973b7b4897866239 not found: ID does not exist" Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.053274 4656 scope.go:117] "RemoveContainer" containerID="7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5" Jan 28 16:15:48 crc kubenswrapper[4656]: E0128 16:15:48.053655 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5\": container with ID starting with 7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5 not found: ID does not exist" containerID="7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5" Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.053696 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5"} err="failed to get container status \"7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5\": rpc error: code = NotFound desc = could not find container \"7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5\": container with ID starting with 7a5826fc4e2cc19c5d8e4fe1efc8cdf3591f210cc01b6bb092b33742628cebb5 not found: ID does not exist" Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.053716 4656 scope.go:117] "RemoveContainer" containerID="b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963" Jan 28 16:15:48 crc kubenswrapper[4656]: E0128 16:15:48.054062 4656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963\": container with ID starting with b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963 not found: ID does not exist" containerID="b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963" Jan 28 16:15:48 crc kubenswrapper[4656]: I0128 16:15:48.054094 4656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963"} err="failed to get container status \"b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963\": rpc error: code = NotFound desc = could not find container \"b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963\": container with ID starting with b9216c3fb588bf56b3578938fccc3ed0fe196d917056d8965c16c7e4ecc9d963 not found: ID does not exist" Jan 28 16:15:49 crc kubenswrapper[4656]: I0128 16:15:49.181264 4656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" path="/var/lib/kubelet/pods/d6b0b613-5382-46d5-9f5a-b53999ea8ae8/volumes" Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.801182 4656 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mbkx9"] Jan 28 16:16:03 crc kubenswrapper[4656]: E0128 16:16:03.804821 4656 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerName="extract-utilities" Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.804925 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerName="extract-utilities" Jan 28 16:16:03 crc kubenswrapper[4656]: E0128 16:16:03.805003 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerName="registry-server" Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.805064 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerName="registry-server" Jan 28 16:16:03 crc kubenswrapper[4656]: E0128 16:16:03.805128 4656 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerName="extract-content" Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.805248 4656 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerName="extract-content" Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.805505 4656 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b0b613-5382-46d5-9f5a-b53999ea8ae8" containerName="registry-server" Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.807143 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbkx9" Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.817700 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbkx9"] Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.887940 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-utilities\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9" Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.888121 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-catalog-content\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9" Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.888238 4656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vdbw\" (UniqueName: \"kubernetes.io/projected/b6054ffc-ba79-470b-9be7-f3791dc870f4-kube-api-access-4vdbw\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9" Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.989741 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-catalog-content\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9" Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.989838 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vdbw\" (UniqueName: \"kubernetes.io/projected/b6054ffc-ba79-470b-9be7-f3791dc870f4-kube-api-access-4vdbw\") pod \"redhat-operators-mbkx9\" 
(UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9" Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.989918 4656 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-utilities\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9" Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.990356 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-catalog-content\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9" Jan 28 16:16:03 crc kubenswrapper[4656]: I0128 16:16:03.990369 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6054ffc-ba79-470b-9be7-f3791dc870f4-utilities\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9" Jan 28 16:16:04 crc kubenswrapper[4656]: I0128 16:16:04.009909 4656 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vdbw\" (UniqueName: \"kubernetes.io/projected/b6054ffc-ba79-470b-9be7-f3791dc870f4-kube-api-access-4vdbw\") pod \"redhat-operators-mbkx9\" (UID: \"b6054ffc-ba79-470b-9be7-f3791dc870f4\") " pod="openshift-marketplace/redhat-operators-mbkx9" Jan 28 16:16:04 crc kubenswrapper[4656]: I0128 16:16:04.139064 4656 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbkx9" Jan 28 16:16:04 crc kubenswrapper[4656]: I0128 16:16:04.627026 4656 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbkx9"] Jan 28 16:16:05 crc kubenswrapper[4656]: I0128 16:16:05.098675 4656 generic.go:334] "Generic (PLEG): container finished" podID="b6054ffc-ba79-470b-9be7-f3791dc870f4" containerID="ae8d56a83413c9afee963ad839a583268c740e6d94db672135aaef76d3ed55b7" exitCode=0 Jan 28 16:16:05 crc kubenswrapper[4656]: I0128 16:16:05.098712 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbkx9" event={"ID":"b6054ffc-ba79-470b-9be7-f3791dc870f4","Type":"ContainerDied","Data":"ae8d56a83413c9afee963ad839a583268c740e6d94db672135aaef76d3ed55b7"} Jan 28 16:16:05 crc kubenswrapper[4656]: I0128 16:16:05.098948 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbkx9" event={"ID":"b6054ffc-ba79-470b-9be7-f3791dc870f4","Type":"ContainerStarted","Data":"96dc734f0176253cf0403f2bc3250965b9d2b3d66d80de4e6081d7255f1d1741"} Jan 28 16:16:06 crc kubenswrapper[4656]: I0128 16:16:06.109351 4656 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbkx9" event={"ID":"b6054ffc-ba79-470b-9be7-f3791dc870f4","Type":"ContainerStarted","Data":"93e2394287100e522052d4ba7b3ebc589e045021d50aaab80b3a00b78e37714e"}